00:00:00.001 Started by upstream project "autotest-per-patch" build number 122813
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.047 The recommended git tool is: git
00:00:00.047 using credential 00000000-0000-0000-0000-000000000002
00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.083 Fetching changes from the remote Git repository
00:00:00.086 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.134 Using shallow fetch with depth 1
00:00:00.134 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.134 > git --version # timeout=10
00:00:00.180 > git --version # 'git version 2.39.2'
00:00:00.180 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.181 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.181 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.520 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.534 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.547 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD)
00:00:03.547 > git config core.sparsecheckout # timeout=10
00:00:03.559 > git read-tree -mu HEAD # timeout=10
00:00:03.575 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5
00:00:03.592 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule"
00:00:03.592 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10
00:00:03.707 [Pipeline] Start of Pipeline
00:00:03.721 [Pipeline] library
00:00:03.723 Loading library shm_lib@master
00:00:03.723 Library shm_lib@master is cached. Copying from home.
00:00:03.741 [Pipeline] node
00:00:03.749 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.750 [Pipeline] {
00:00:03.761 [Pipeline] catchError
00:00:03.763 [Pipeline] {
00:00:03.779 [Pipeline] wrap
00:00:03.790 [Pipeline] {
00:00:03.795 [Pipeline] stage
00:00:03.796 [Pipeline] { (Prologue)
00:00:04.011 [Pipeline] sh
00:00:04.297 + logger -p user.info -t JENKINS-CI
00:00:04.317 [Pipeline] echo
00:00:04.318 Node: WFP22
00:00:04.326 [Pipeline] sh
00:00:04.626 [Pipeline] setCustomBuildProperty
00:00:04.639 [Pipeline] echo
00:00:04.641 Cleanup processes
00:00:04.647 [Pipeline] sh
00:00:04.931 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.931 3282182 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.945 [Pipeline] sh
00:00:05.272 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.272 ++ grep -v 'sudo pgrep'
00:00:05.272 ++ awk '{print $1}'
00:00:05.272 + sudo kill -9
00:00:05.272 + true
00:00:05.285 [Pipeline] cleanWs
00:00:05.293 [WS-CLEANUP] Deleting project workspace...
00:00:05.293 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.298 [WS-CLEANUP] done
00:00:05.304 [Pipeline] setCustomBuildProperty
00:00:05.318 [Pipeline] sh
00:00:05.595 + sudo git config --global --replace-all safe.directory '*'
00:00:05.647 [Pipeline] nodesByLabel
00:00:05.648 Found a total of 1 nodes with the 'sorcerer' label
00:00:05.654 [Pipeline] httpRequest
00:00:05.658 HttpMethod: GET
00:00:05.659 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:05.662 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:05.665 Response Code: HTTP/1.1 200 OK
00:00:05.665 Success: Status code 200 is in the accepted range: 200,404
00:00:05.666 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:06.221 [Pipeline] sh
00:00:06.503 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:06.526 [Pipeline] httpRequest
00:00:06.532 HttpMethod: GET
00:00:06.533 URL: http://10.211.164.101/packages/spdk_52939f252f2e182ba62a91f015fc30b8e463d7b0.tar.gz
00:00:06.533 Sending request to url: http://10.211.164.101/packages/spdk_52939f252f2e182ba62a91f015fc30b8e463d7b0.tar.gz
00:00:06.535 Response Code: HTTP/1.1 200 OK
00:00:06.536 Success: Status code 200 is in the accepted range: 200,404
00:00:06.536 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_52939f252f2e182ba62a91f015fc30b8e463d7b0.tar.gz
00:00:21.374 [Pipeline] sh
00:00:21.660 + tar --no-same-owner -xf spdk_52939f252f2e182ba62a91f015fc30b8e463d7b0.tar.gz
00:00:24.210 [Pipeline] sh
00:00:24.493 + git -C spdk log --oneline -n5
00:00:24.493 52939f252 lib/blobfs: fix memory error for spdk_file_write
00:00:24.493 235c4c537 xnvme: change gitmodule-remote
00:00:24.493 bf8fa3b96 test/skipped_tests: update the list to current per-patch
00:00:24.493 e2d29d42b test/ftl: remove duplicated ftl_dirty_shutdown
00:00:24.493 7313180df test/ftl: replace FTL extended and nightly flags
00:00:24.505 [Pipeline] }
00:00:24.520 [Pipeline] // stage
00:00:24.529 [Pipeline] stage
00:00:24.530 [Pipeline] { (Prepare)
00:00:24.548 [Pipeline] writeFile
00:00:24.565 [Pipeline] sh
00:00:24.849 + logger -p user.info -t JENKINS-CI
00:00:24.863 [Pipeline] sh
00:00:25.147 + logger -p user.info -t JENKINS-CI
00:00:25.161 [Pipeline] sh
00:00:25.446 + cat autorun-spdk.conf
00:00:25.446 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.446 SPDK_TEST_NVMF=1
00:00:25.446 SPDK_TEST_NVME_CLI=1
00:00:25.446 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:25.446 SPDK_TEST_NVMF_NICS=e810
00:00:25.446 SPDK_TEST_VFIOUSER=1
00:00:25.446 SPDK_RUN_UBSAN=1
00:00:25.446 NET_TYPE=phy
00:00:25.453 RUN_NIGHTLY=0
00:00:25.457 [Pipeline] readFile
00:00:25.479 [Pipeline] withEnv
00:00:25.481 [Pipeline] {
00:00:25.491 [Pipeline] sh
00:00:25.770 + set -ex
00:00:25.770 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:25.770 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:25.770 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.770 ++ SPDK_TEST_NVMF=1
00:00:25.770 ++ SPDK_TEST_NVME_CLI=1
00:00:25.770 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:25.770 ++ SPDK_TEST_NVMF_NICS=e810
00:00:25.770 ++ SPDK_TEST_VFIOUSER=1
00:00:25.770 ++ SPDK_RUN_UBSAN=1
00:00:25.770 ++ NET_TYPE=phy
00:00:25.770 ++ RUN_NIGHTLY=0
00:00:25.770 + case $SPDK_TEST_NVMF_NICS in
00:00:25.770 + DRIVERS=ice
00:00:25.770 + [[ tcp == \r\d\m\a ]]
00:00:25.770 + [[ -n ice ]]
00:00:25.770 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:25.770 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:25.770 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:25.770 rmmod: ERROR: Module irdma is not currently loaded
00:00:25.770 rmmod: ERROR: Module i40iw is not currently loaded
00:00:25.770 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:25.770 + true
00:00:25.770 + for D in $DRIVERS
00:00:25.770 + sudo modprobe ice
00:00:25.770 + exit 0
00:00:25.779 [Pipeline] }
00:00:25.795 [Pipeline] // withEnv
00:00:25.799 [Pipeline] }
00:00:25.815 [Pipeline] // stage
00:00:25.824 [Pipeline] catchError
00:00:25.826 [Pipeline] {
00:00:25.839 [Pipeline] timeout
00:00:25.839 Timeout set to expire in 40 min
00:00:25.841 [Pipeline] {
00:00:25.856 [Pipeline] stage
00:00:25.858 [Pipeline] { (Tests)
00:00:25.875 [Pipeline] sh
00:00:26.158 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:26.159 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:26.159 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:26.159 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:26.159 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:26.159 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:26.159 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:26.159 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:26.159 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:26.159 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:26.159 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:26.159 + source /etc/os-release
00:00:26.159 ++ NAME='Fedora Linux'
00:00:26.159 ++ VERSION='38 (Cloud Edition)'
00:00:26.159 ++ ID=fedora
00:00:26.159 ++ VERSION_ID=38
00:00:26.159 ++ VERSION_CODENAME=
00:00:26.159 ++ PLATFORM_ID=platform:f38
00:00:26.159 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:26.159 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:26.159 ++ LOGO=fedora-logo-icon
00:00:26.159 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:26.159 ++ HOME_URL=https://fedoraproject.org/
00:00:26.159 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:26.159 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:26.159 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:26.159 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:26.159 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:26.159 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:26.159 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:26.159 ++ SUPPORT_END=2024-05-14
00:00:26.159 ++ VARIANT='Cloud Edition'
00:00:26.159 ++ VARIANT_ID=cloud
00:00:26.159 + uname -a
00:00:26.159 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:26.159 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:28.696 Hugepages
00:00:28.696 node hugesize free / total
00:00:28.696 node0 1048576kB 0 / 0
00:00:28.696 node0 2048kB 0 / 0
00:00:28.696 node1 1048576kB 0 / 0
00:00:28.696 node1 2048kB 0 / 0
00:00:28.696
00:00:28.696 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:28.696 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:28.696 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:28.696 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:28.696 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:28.696 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:28.696 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:28.696 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:28.696 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:28.696 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:28.696 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:28.696 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:28.696 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:28.696 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:28.696 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:28.696 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:28.696 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:28.696 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:28.956 + rm -f /tmp/spdk-ld-path
00:00:28.956 + source autorun-spdk.conf
00:00:28.956 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.956 ++ SPDK_TEST_NVMF=1
00:00:28.956 ++ SPDK_TEST_NVME_CLI=1
00:00:28.956 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:28.956 ++ SPDK_TEST_NVMF_NICS=e810
00:00:28.956 ++ SPDK_TEST_VFIOUSER=1
00:00:28.956 ++ SPDK_RUN_UBSAN=1
00:00:28.956 ++ NET_TYPE=phy
00:00:28.956 ++ RUN_NIGHTLY=0
00:00:28.956 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:28.956 + [[ -n '' ]]
00:00:28.956 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:28.956 + for M in /var/spdk/build-*-manifest.txt
00:00:28.956 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:28.956 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:28.956 + for M in /var/spdk/build-*-manifest.txt
00:00:28.956 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:28.956 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:28.956 ++ uname
00:00:28.956 + [[ Linux == \L\i\n\u\x ]]
00:00:28.956 + sudo dmesg -T
00:00:28.956 + sudo dmesg --clear
00:00:28.956 + dmesg_pid=3283164
00:00:28.956 + [[ Fedora Linux == FreeBSD ]]
00:00:28.956 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:28.956 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:28.956 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:28.956 + [[ -x /usr/src/fio-static/fio ]]
00:00:28.956 + export FIO_BIN=/usr/src/fio-static/fio
00:00:28.956 + FIO_BIN=/usr/src/fio-static/fio
00:00:28.956 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:28.956 + sudo dmesg -Tw
00:00:28.956 + [[ !
-v VFIO_QEMU_BIN ]] 00:00:28.956 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:28.956 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:28.956 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:28.956 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:28.956 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:28.956 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:28.956 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:28.956 Test configuration: 00:00:28.956 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.956 SPDK_TEST_NVMF=1 00:00:28.956 SPDK_TEST_NVME_CLI=1 00:00:28.956 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:28.956 SPDK_TEST_NVMF_NICS=e810 00:00:28.956 SPDK_TEST_VFIOUSER=1 00:00:28.956 SPDK_RUN_UBSAN=1 00:00:28.956 NET_TYPE=phy 00:00:28.956 RUN_NIGHTLY=0 23:42:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:28.956 23:42:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:28.956 23:42:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:28.956 23:42:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:28.956 23:42:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.956 23:42:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.956 23:42:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.956 23:42:29 -- paths/export.sh@5 -- $ export PATH 00:00:28.956 23:42:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:28.956 23:42:29 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:28.956 23:42:29 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:29.216 23:42:29 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715722949.XXXXXX 00:00:29.216 23:42:29 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715722949.wqPHbD 00:00:29.216 23:42:29 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:29.216 23:42:29 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:29.216 23:42:29 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:29.216 23:42:29 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:29.216 23:42:29 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:29.216 23:42:29 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:29.216 23:42:29 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:29.216 23:42:29 -- common/autotest_common.sh@10 -- $ set +x 00:00:29.216 23:42:29 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:29.216 23:42:29 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:29.216 23:42:29 -- pm/common@17 -- $ local monitor 00:00:29.216 23:42:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:29.216 23:42:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:29.216 23:42:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:29.216 23:42:29 -- pm/common@21 -- $ date +%s 00:00:29.216 23:42:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:29.216 23:42:29 -- pm/common@21 -- $ date +%s 00:00:29.216 23:42:29 -- pm/common@21 -- $ date +%s 00:00:29.216 23:42:29 -- pm/common@25 -- $ sleep 1 00:00:29.216 23:42:29 -- pm/common@21 -- $ date +%s 00:00:29.216 23:42:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715722949 00:00:29.216 23:42:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715722949 00:00:29.216 23:42:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715722949 00:00:29.216 23:42:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715722949 00:00:29.216 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715722949_collect-cpu-temp.pm.log 00:00:29.216 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715722949_collect-vmstat.pm.log 00:00:29.216 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715722949_collect-cpu-load.pm.log 00:00:29.217 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715722949_collect-bmc-pm.bmc.pm.log 00:00:30.155 23:42:30 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:30.155 23:42:30 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:30.155 23:42:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:30.155 23:42:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:30.155 23:42:30 -- spdk/autobuild.sh@16 -- $ date -u 00:00:30.155 Tue May 14 09:42:30 PM UTC 2024 00:00:30.155 23:42:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:30.155 v24.05-pre-617-g52939f252 00:00:30.155 23:42:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:30.155 23:42:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:30.155 23:42:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:30.155 23:42:30 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:30.155 23:42:30 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:30.155 23:42:30 -- common/autotest_common.sh@10 -- $ set +x 00:00:30.155 ************************************ 00:00:30.155 START TEST ubsan 00:00:30.155 ************************************ 00:00:30.155 23:42:30 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:30.155 using ubsan 00:00:30.155 00:00:30.155 real 0m0.000s 00:00:30.155 user 0m0.000s 00:00:30.155 sys 0m0.000s 00:00:30.155 23:42:30 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:30.155 23:42:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:30.155 ************************************ 00:00:30.155 END TEST ubsan 00:00:30.155 ************************************ 00:00:30.155 23:42:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:30.155 23:42:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:30.155 23:42:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:30.155 23:42:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:30.155 23:42:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:30.155 23:42:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:30.155 23:42:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:30.155 23:42:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:30.155 23:42:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:30.414 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:30.414 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:30.673 Using 'verbs' RDMA provider 00:00:43.823 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:00:58.717 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:00:58.717 Creating mk/config.mk...done. 00:00:58.717 Creating mk/cc.flags.mk...done. 00:00:58.717 Type 'make' to build. 00:00:58.717 23:42:57 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:00:58.717 23:42:57 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:58.717 23:42:57 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:58.717 23:42:57 -- common/autotest_common.sh@10 -- $ set +x 00:00:58.717 ************************************ 00:00:58.717 START TEST make 00:00:58.717 ************************************ 00:00:58.717 23:42:57 make -- common/autotest_common.sh@1121 -- $ make -j112 00:00:58.717 make[1]: Nothing to be done for 'all'. 
00:00:58.975 The Meson build system 00:00:58.975 Version: 1.3.1 00:00:58.975 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:00:58.975 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:58.975 Build type: native build 00:00:58.975 Project name: libvfio-user 00:00:58.975 Project version: 0.0.1 00:00:58.975 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:58.975 C linker for the host machine: cc ld.bfd 2.39-16 00:00:58.975 Host machine cpu family: x86_64 00:00:58.975 Host machine cpu: x86_64 00:00:58.975 Run-time dependency threads found: YES 00:00:58.975 Library dl found: YES 00:00:58.975 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:58.975 Run-time dependency json-c found: YES 0.17 00:00:58.975 Run-time dependency cmocka found: YES 1.1.7 00:00:58.975 Program pytest-3 found: NO 00:00:58.975 Program flake8 found: NO 00:00:58.975 Program misspell-fixer found: NO 00:00:58.975 Program restructuredtext-lint found: NO 00:00:58.975 Program valgrind found: YES (/usr/bin/valgrind) 00:00:58.975 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:58.975 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:58.975 Compiler for C supports arguments -Wwrite-strings: YES 00:00:58.975 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:00:58.975 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:00:58.975 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:00:58.975 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:00:58.975 Build targets in project: 8 00:00:58.975 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:00:58.975 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:00:58.975 00:00:58.975 libvfio-user 0.0.1 00:00:58.975 00:00:58.975 User defined options 00:00:58.975 buildtype : debug 00:00:58.975 default_library: shared 00:00:58.975 libdir : /usr/local/lib 00:00:58.975 00:00:58.975 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:59.233 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:00:59.492 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:00:59.492 [2/37] Compiling C object samples/null.p/null.c.o 00:00:59.492 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:00:59.492 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:00:59.492 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:00:59.492 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:00:59.492 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:00:59.492 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:00:59.492 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:00:59.492 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:00:59.492 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:00:59.492 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:00:59.492 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:00:59.492 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:00:59.492 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:00:59.492 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:00:59.492 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:00:59.492 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:00:59.492 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:00:59.492 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:00:59.492 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:00:59.492 [22/37] Compiling C object samples/server.p/server.c.o 00:00:59.492 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:00:59.492 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:00:59.492 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:00:59.492 [26/37] Compiling C object samples/client.p/client.c.o 00:00:59.492 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:00:59.492 [28/37] Linking target samples/client 00:00:59.492 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:00:59.751 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:00:59.751 [31/37] Linking target test/unit_tests 00:00:59.751 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:00:59.751 [33/37] Linking target samples/lspci 00:00:59.751 [34/37] Linking target samples/server 00:00:59.751 [35/37] Linking target samples/null 00:00:59.751 [36/37] Linking target samples/gpio-pci-idio-16 00:00:59.751 [37/37] Linking target samples/shadow_ioeventfd_server 00:00:59.751 INFO: autodetecting backend as ninja 00:00:59.751 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:00:59.751 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:00.010 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:00.010 ninja: no work to do. 00:01:05.350 The Meson build system 00:01:05.350 Version: 1.3.1 00:01:05.350 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:05.350 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:05.350 Build type: native build 00:01:05.350 Program cat found: YES (/usr/bin/cat) 00:01:05.350 Project name: DPDK 00:01:05.350 Project version: 23.11.0 00:01:05.350 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:05.350 C linker for the host machine: cc ld.bfd 2.39-16 00:01:05.350 Host machine cpu family: x86_64 00:01:05.350 Host machine cpu: x86_64 00:01:05.350 Message: ## Building in Developer Mode ## 00:01:05.350 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:05.350 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:05.350 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:05.350 Program python3 found: YES (/usr/bin/python3) 00:01:05.350 Program cat found: YES (/usr/bin/cat) 00:01:05.350 Compiler for C supports arguments -march=native: YES 00:01:05.350 Checking for size of "void *" : 8 00:01:05.350 Checking for size of "void *" : 8 (cached) 00:01:05.350 Library m found: YES 00:01:05.350 Library numa found: YES 00:01:05.350 Has header "numaif.h" : YES 00:01:05.350 Library fdt found: NO 00:01:05.350 Library execinfo found: NO 00:01:05.350 Has header "execinfo.h" : YES 00:01:05.350 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:05.351 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:05.351 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:05.351 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:05.351 Run-time dependency openssl found: YES 3.0.9 00:01:05.351 Run-time dependency libpcap found: YES 1.10.4 00:01:05.351 Has header "pcap.h" with dependency libpcap: YES 00:01:05.351 Compiler for C supports arguments -Wcast-qual: YES 00:01:05.351 Compiler for C supports arguments -Wdeprecated: YES 00:01:05.351 Compiler for C supports arguments -Wformat: YES 00:01:05.351 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:05.351 Compiler for C supports arguments -Wformat-security: NO 00:01:05.351 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:05.351 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:05.351 Compiler for C supports arguments -Wnested-externs: YES 00:01:05.351 Compiler for C supports arguments -Wold-style-definition: YES 00:01:05.351 Compiler for C supports arguments -Wpointer-arith: YES 00:01:05.351 Compiler for C supports arguments -Wsign-compare: YES 00:01:05.351 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:05.351 Compiler for C supports arguments -Wundef: YES 00:01:05.351 Compiler for C supports arguments -Wwrite-strings: YES 00:01:05.351 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:05.351 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:05.351 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:05.351 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:05.351 Program objdump found: YES (/usr/bin/objdump) 00:01:05.351 Compiler for C supports arguments -mavx512f: YES 00:01:05.351 Checking if "AVX512 checking" compiles: YES 00:01:05.351 Fetching value of define "__SSE4_2__" : 1 00:01:05.351 Fetching value of define "__AES__" : 1 00:01:05.351 Fetching value of define "__AVX__" : 1 00:01:05.351 Fetching value of define "__AVX2__" : 1 00:01:05.351 Fetching value of define "__AVX512BW__" : 1 00:01:05.351 Fetching value of define "__AVX512CD__" : 1 00:01:05.351 Fetching value of define "__AVX512DQ__" : 1 00:01:05.351 Fetching value of define "__AVX512F__" : 1 00:01:05.351 Fetching value of define "__AVX512VL__" : 1 00:01:05.351 Fetching value of define "__PCLMUL__" : 1 00:01:05.351 Fetching value of define "__RDRND__" : 1 00:01:05.351 Fetching value of define "__RDSEED__" : 1 00:01:05.351 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:05.351 Fetching value of define "__znver1__" : (undefined) 00:01:05.351 Fetching value of define "__znver2__" : (undefined) 00:01:05.351 Fetching value of define "__znver3__" : (undefined) 00:01:05.351 Fetching value of define "__znver4__" : (undefined) 00:01:05.351 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:05.351 Message: lib/log: Defining dependency "log" 00:01:05.351 Message: lib/kvargs: Defining dependency "kvargs" 00:01:05.351 Message: lib/telemetry: Defining dependency "telemetry" 00:01:05.351 Checking for function "getentropy" : NO 00:01:05.351 Message: lib/eal: Defining dependency "eal" 00:01:05.351 Message: lib/ring: Defining dependency "ring" 00:01:05.351 Message: lib/rcu: Defining dependency "rcu" 00:01:05.351 Message: lib/mempool: Defining dependency "mempool" 00:01:05.351 Message: lib/mbuf: Defining dependency "mbuf" 00:01:05.351 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:05.351 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:05.351 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:05.351 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:05.351 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:05.351 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:05.351 Compiler for C supports arguments -mpclmul: YES 00:01:05.351 Compiler for C supports arguments -maes: YES 00:01:05.351 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:05.351 Compiler for C supports arguments -mavx512bw: YES 00:01:05.351 Compiler for C supports arguments -mavx512dq: YES 00:01:05.351 Compiler for C supports arguments -mavx512vl: YES 00:01:05.351 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:05.351 Compiler for C supports arguments -mavx2: YES 00:01:05.351 Compiler for C supports arguments -mavx: YES 00:01:05.351 Message: lib/net: Defining dependency "net" 00:01:05.351 Message: lib/meter: Defining dependency "meter" 00:01:05.351 Message: lib/ethdev: Defining dependency "ethdev" 00:01:05.351 Message: lib/pci: Defining dependency "pci" 00:01:05.351 Message: lib/cmdline: Defining dependency "cmdline" 00:01:05.351 Message: lib/hash: Defining dependency "hash" 00:01:05.351 Message: lib/timer: Defining dependency "timer" 00:01:05.351 Message: lib/compressdev: Defining dependency "compressdev" 00:01:05.351 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:05.351 Message: lib/dmadev: Defining dependency "dmadev" 00:01:05.351 Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:05.351 Message: lib/power: Defining dependency "power" 00:01:05.351 Message: lib/reorder: Defining dependency "reorder" 00:01:05.351 Message: lib/security: Defining dependency "security" 00:01:05.351 Has header "linux/userfaultfd.h" : YES 00:01:05.351 Has header "linux/vduse.h" : YES 00:01:05.351 Message: lib/vhost: Defining dependency "vhost" 00:01:05.351 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:05.351 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:05.351 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:05.351 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:05.351 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:05.351 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:05.351 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:05.351 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:05.351 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:05.351 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:05.351 Program doxygen found: YES (/usr/bin/doxygen) 00:01:05.351 Configuring doxy-api-html.conf using configuration 00:01:05.351 Configuring doxy-api-man.conf using configuration 00:01:05.351 Program mandb found: YES (/usr/bin/mandb) 00:01:05.351 Program sphinx-build found: NO 00:01:05.351 Configuring rte_build_config.h using configuration 00:01:05.351 Message: 00:01:05.351 ================= 00:01:05.351 Applications Enabled 00:01:05.351 ================= 00:01:05.351 00:01:05.351 apps: 00:01:05.351 00:01:05.351 00:01:05.351 Message: 00:01:05.351 ================= 00:01:05.351 Libraries Enabled 00:01:05.351 ================= 00:01:05.351 00:01:05.351 libs: 00:01:05.351 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:05.351 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:05.351 cryptodev, dmadev, power, reorder, security, vhost, 00:01:05.351 00:01:05.351 Message: 00:01:05.351 =============== 00:01:05.351 Drivers Enabled 00:01:05.351 =============== 00:01:05.351 00:01:05.351 common: 00:01:05.351 00:01:05.351 bus: 00:01:05.351 pci, vdev, 00:01:05.351 mempool: 00:01:05.351 ring, 00:01:05.351 dma: 00:01:05.351 00:01:05.351 net: 00:01:05.351 00:01:05.351 crypto: 00:01:05.351 00:01:05.351 compress: 00:01:05.351 00:01:05.351 vdpa: 00:01:05.351 00:01:05.351 00:01:05.351 Message: 00:01:05.351 ================= 00:01:05.351 Content Skipped 00:01:05.351 ================= 00:01:05.351 00:01:05.351 apps: 00:01:05.351 dumpcap: explicitly disabled via build config 00:01:05.351 graph: explicitly disabled via build config 00:01:05.351 pdump: explicitly disabled via build config 00:01:05.351 proc-info: explicitly disabled via build config 00:01:05.351 test-acl: explicitly disabled via build config 00:01:05.351 test-bbdev: explicitly disabled via build config 00:01:05.351 test-cmdline: explicitly disabled via build config 00:01:05.351 test-compress-perf: explicitly disabled via build config 00:01:05.351 test-crypto-perf: explicitly disabled via build config 00:01:05.351 test-dma-perf: explicitly disabled via build config 00:01:05.351 test-eventdev: explicitly disabled via build config 00:01:05.351 test-fib: explicitly disabled via build config 00:01:05.351 test-flow-perf: explicitly disabled via build config 00:01:05.351 test-gpudev: explicitly disabled via build config 00:01:05.351 test-mldev: explicitly disabled via build 
config 00:01:05.351 test-pipeline: explicitly disabled via build config 00:01:05.351 test-pmd: explicitly disabled via build config 00:01:05.351 test-regex: explicitly disabled via build config 00:01:05.351 test-sad: explicitly disabled via build config 00:01:05.351 test-security-perf: explicitly disabled via build config 00:01:05.351 00:01:05.351 libs: 00:01:05.351 metrics: explicitly disabled via build config 00:01:05.351 acl: explicitly disabled via build config 00:01:05.351 bbdev: explicitly disabled via build config 00:01:05.351 bitratestats: explicitly disabled via build config 00:01:05.351 bpf: explicitly disabled via build config 00:01:05.351 cfgfile: explicitly disabled via build config 00:01:05.351 distributor: explicitly disabled via build config 00:01:05.351 efd: explicitly disabled via build config 00:01:05.351 eventdev: explicitly disabled via build config 00:01:05.351 dispatcher: explicitly disabled via build config 00:01:05.351 gpudev: explicitly disabled via build config 00:01:05.351 gro: explicitly disabled via build config 00:01:05.351 gso: explicitly disabled via build config 00:01:05.351 ip_frag: explicitly disabled via build config 00:01:05.351 jobstats: explicitly disabled via build config 00:01:05.351 latencystats: explicitly disabled via build config 00:01:05.351 lpm: explicitly disabled via build config 00:01:05.351 member: explicitly disabled via build config 00:01:05.351 pcapng: explicitly disabled via build config 00:01:05.351 rawdev: explicitly disabled via build config 00:01:05.351 regexdev: explicitly disabled via build config 00:01:05.351 mldev: explicitly disabled via build config 00:01:05.351 rib: explicitly disabled via build config 00:01:05.351 sched: explicitly disabled via build config 00:01:05.351 stack: explicitly disabled via build config 00:01:05.351 ipsec: explicitly disabled via build config 00:01:05.351 pdcp: explicitly disabled via build config 00:01:05.351 fib: explicitly disabled via build config 00:01:05.351 port: explicitly disabled via build config 00:01:05.351 pdump: explicitly disabled via build config 00:01:05.351 table: explicitly disabled via build config 00:01:05.351 pipeline: explicitly disabled via build config 00:01:05.351 graph: explicitly disabled via build config 00:01:05.351 node: explicitly disabled via build config 00:01:05.351 00:01:05.351 drivers: 00:01:05.351 common/cpt: not in enabled drivers build config 00:01:05.352 common/dpaax: not in enabled drivers build config 00:01:05.352 common/iavf: not in enabled drivers build config 00:01:05.352 common/idpf: not in enabled drivers build config 00:01:05.352 common/mvep: not in enabled drivers build config 00:01:05.352 common/octeontx: not in enabled drivers build config 00:01:05.352 bus/auxiliary: not in enabled drivers build config 00:01:05.352 bus/cdx: not in enabled drivers build config 00:01:05.352 bus/dpaa: not in enabled drivers build config 00:01:05.352 bus/fslmc: not in enabled drivers build config 00:01:05.352 bus/ifpga: not in enabled drivers build config 00:01:05.352 bus/platform: not in enabled drivers build config 00:01:05.352 bus/vmbus: not in enabled drivers build config 00:01:05.352 common/cnxk: not in enabled drivers build config 00:01:05.352 common/mlx5: not in enabled drivers build config 00:01:05.352 common/nfp: not in enabled drivers build config 00:01:05.352 common/qat: not in enabled drivers build config 00:01:05.352 common/sfc_efx: not in enabled drivers build config 00:01:05.352 mempool/bucket: not in enabled drivers build config 00:01:05.352 
mempool/cnxk: not in enabled drivers build config 00:01:05.352 mempool/dpaa: not in enabled drivers build config 00:01:05.352 mempool/dpaa2: not in enabled drivers build config 00:01:05.352 mempool/octeontx: not in enabled drivers build config 00:01:05.352 mempool/stack: not in enabled drivers build config 00:01:05.352 dma/cnxk: not in enabled drivers build config 00:01:05.352 dma/dpaa: not in enabled drivers build config 00:01:05.352 dma/dpaa2: not in enabled drivers build config 00:01:05.352 dma/hisilicon: not in enabled drivers build config 00:01:05.352 dma/idxd: not in enabled drivers build config 00:01:05.352 dma/ioat: not in enabled drivers build config 00:01:05.352 dma/skeleton: not in enabled drivers build config 00:01:05.352 net/af_packet: not in enabled drivers build config 00:01:05.352 net/af_xdp: not in enabled drivers build config 00:01:05.352 net/ark: not in enabled drivers build config 00:01:05.352 net/atlantic: not in enabled drivers build config 00:01:05.352 net/avp: not in enabled drivers build config 00:01:05.352 net/axgbe: not in enabled drivers build config 00:01:05.352 net/bnx2x: not in enabled drivers build config 00:01:05.352 net/bnxt: not in enabled drivers build config 00:01:05.352 net/bonding: not in enabled drivers build config 00:01:05.352 net/cnxk: not in enabled drivers build config 00:01:05.352 net/cpfl: not in enabled drivers build config 00:01:05.352 net/cxgbe: not in enabled drivers build config 00:01:05.352 net/dpaa: not in enabled drivers build config 00:01:05.352 net/dpaa2: not in enabled drivers build config 00:01:05.352 net/e1000: not in enabled drivers build config 00:01:05.352 net/ena: not in enabled drivers build config 00:01:05.352 net/enetc: not in enabled drivers build config 00:01:05.352 net/enetfec: not in enabled drivers build config 00:01:05.352 net/enic: not in enabled drivers build config 00:01:05.352 net/failsafe: not in enabled drivers build config 00:01:05.352 net/fm10k: not in enabled drivers build config 00:01:05.352 net/gve: not in enabled drivers build config 00:01:05.352 net/hinic: not in enabled drivers build config 00:01:05.352 net/hns3: not in enabled drivers build config 00:01:05.352 net/i40e: not in enabled drivers build config 00:01:05.352 net/iavf: not in enabled drivers build config 00:01:05.352 net/ice: not in enabled drivers build config 00:01:05.352 net/idpf: not in enabled drivers build config 00:01:05.352 net/igc: not in enabled drivers build config 00:01:05.352 net/ionic: not in enabled drivers build config 00:01:05.352 net/ipn3ke: not in enabled drivers build config 00:01:05.352 net/ixgbe: not in enabled drivers build config 00:01:05.352 net/mana: not in enabled drivers build config 00:01:05.352 net/memif: not in enabled drivers build config 00:01:05.352 net/mlx4: not in enabled drivers build config 00:01:05.352 net/mlx5: not in enabled drivers build config 00:01:05.352 net/mvneta: not in enabled drivers build config 00:01:05.352 net/mvpp2: not in enabled drivers build config 00:01:05.352 net/netvsc: not in enabled drivers build config 00:01:05.352 net/nfb: not in enabled drivers build config 00:01:05.352 net/nfp: not in enabled drivers build config 00:01:05.352 net/ngbe: not in enabled drivers build config 00:01:05.352 net/null: not in enabled drivers build config 00:01:05.352 net/octeontx: not in enabled drivers build config 00:01:05.352 net/octeon_ep: not in enabled drivers build config 00:01:05.352 net/pcap: not in enabled drivers build config 00:01:05.352 net/pfe: not in enabled drivers build config 
00:01:05.352 net/qede: not in enabled drivers build config 00:01:05.352 net/ring: not in enabled drivers build config 00:01:05.352 net/sfc: not in enabled drivers build config 00:01:05.352 net/softnic: not in enabled drivers build config 00:01:05.352 net/tap: not in enabled drivers build config 00:01:05.352 net/thunderx: not in enabled drivers build config 00:01:05.352 net/txgbe: not in enabled drivers build config 00:01:05.352 net/vdev_netvsc: not in enabled drivers build config 00:01:05.352 net/vhost: not in enabled drivers build config 00:01:05.352 net/virtio: not in enabled drivers build config 00:01:05.352 net/vmxnet3: not in enabled drivers build config 00:01:05.352 raw/*: missing internal dependency, "rawdev" 00:01:05.352 crypto/armv8: not in enabled drivers build config 00:01:05.352 crypto/bcmfs: not in enabled drivers build config 00:01:05.352 crypto/caam_jr: not in enabled drivers build config 00:01:05.352 crypto/ccp: not in enabled drivers build config 00:01:05.352 crypto/cnxk: not in enabled drivers build config 00:01:05.352 crypto/dpaa_sec: not in enabled drivers build config 00:01:05.352 crypto/dpaa2_sec: not in enabled drivers build config 00:01:05.352 crypto/ipsec_mb: not in enabled drivers build config 00:01:05.352 crypto/mlx5: not in enabled drivers build config 00:01:05.352 crypto/mvsam: not in enabled drivers build config 00:01:05.352 crypto/nitrox: not in enabled drivers build config 00:01:05.352 crypto/null: not in enabled drivers build config 00:01:05.352 crypto/octeontx: not in enabled drivers build config 00:01:05.352 crypto/openssl: not in enabled drivers build config 00:01:05.352 crypto/scheduler: not in enabled drivers build config 00:01:05.352 crypto/uadk: not in enabled drivers build config 00:01:05.352 crypto/virtio: not in enabled drivers build config 00:01:05.352 compress/isal: not in enabled drivers build config 00:01:05.352 compress/mlx5: not in enabled drivers build config 00:01:05.352 compress/octeontx: not in enabled drivers build config 00:01:05.352 compress/zlib: not in enabled drivers build config 00:01:05.352 regex/*: missing internal dependency, "regexdev" 00:01:05.352 ml/*: missing internal dependency, "mldev" 00:01:05.352 vdpa/ifc: not in enabled drivers build config 00:01:05.352 vdpa/mlx5: not in enabled drivers build config 00:01:05.352 vdpa/nfp: not in enabled drivers build config 00:01:05.352 vdpa/sfc: not in enabled drivers build config 00:01:05.352 event/*: missing internal dependency, "eventdev" 00:01:05.352 baseband/*: missing internal dependency, "bbdev" 00:01:05.352 gpu/*: missing internal dependency, "gpudev" 00:01:05.352 00:01:05.352 00:01:05.352 Build targets in project: 85 00:01:05.352 00:01:05.352 DPDK 23.11.0 00:01:05.352 00:01:05.352 User defined options 00:01:05.352 buildtype : debug 00:01:05.352 default_library : shared 00:01:05.352 libdir : lib 00:01:05.352 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:05.352 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:05.352 c_link_args : 00:01:05.352 cpu_instruction_set: native 00:01:05.352 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:05.352 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:05.352 enable_docs : false 00:01:05.352 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:05.352 enable_kmods : false 00:01:05.352 tests : false 00:01:05.352 00:01:05.352 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:05.619 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:05.619 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:05.619 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:05.884 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:05.884 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:05.884 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:05.884 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:05.884 [7/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:05.884 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:05.884 [9/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:05.884 [10/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:05.884 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:05.884 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:05.884 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:05.884 [14/265] Linking static target lib/librte_kvargs.a 00:01:05.884 [15/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:05.884 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:05.884 [17/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:05.884 [18/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:05.884 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:05.884 [20/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:05.884 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:05.884 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:05.884 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:05.884 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:05.884 [25/265] Linking static target lib/librte_log.a 00:01:05.884 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:05.884 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:05.884 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:05.884 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:05.885 [30/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:05.885 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:05.885 [32/265] Linking static target lib/librte_pci.a 00:01:05.885 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:06.147 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:06.147 [35/265] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:06.147 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:06.147 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:06.147 [38/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:06.147 [39/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:06.147 [40/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:06.147 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:06.410 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:06.410 [43/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:06.410 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:06.410 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:06.410 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:06.410 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:06.410 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:06.410 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:06.410 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:06.410 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:06.410 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:06.410 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:06.410 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:06.410 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:06.410 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:06.410 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:06.410 [58/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:06.410 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:06.410 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:06.410 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:06.410 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:06.410 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:06.410 [64/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:06.410 [65/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:06.410 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:06.410 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:06.410 [68/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:06.410 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:06.410 [70/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.410 [71/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:06.410 [72/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:06.410 [73/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.410 [74/265] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:06.410 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:06.410 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:06.410 [77/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:06.410 [78/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:06.410 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:06.410 [80/265] Linking static target lib/librte_meter.a 00:01:06.410 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:06.410 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:06.410 [83/265] Linking static target lib/librte_ring.a 00:01:06.410 [84/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:06.410 [85/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:06.410 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:06.410 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:06.410 [88/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:06.410 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:06.410 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:06.410 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:06.410 [92/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:06.410 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:06.410 [94/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:06.410 [95/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:06.410 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:06.410 [97/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:06.410 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:06.410 [99/265] Linking static target lib/librte_telemetry.a 00:01:06.410 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:06.410 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:06.410 [102/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:06.410 [103/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:06.410 [104/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:06.410 [105/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:06.410 [106/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:06.410 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:06.410 [108/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:06.410 [109/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:06.410 [110/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:06.410 [111/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:06.410 [112/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:06.410 [113/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:06.410 [114/265] Linking static target lib/librte_net.a 00:01:06.410 [115/265] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:06.410 [116/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:06.410 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:06.410 [118/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:06.410 [119/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:06.410 [120/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:06.410 [121/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:06.410 [122/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:06.410 [123/265] Linking static target lib/librte_cmdline.a 00:01:06.410 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:06.410 [125/265] Linking static target lib/librte_timer.a 00:01:06.410 [126/265] Linking static target lib/librte_mempool.a 00:01:06.410 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:06.410 [128/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:06.410 [129/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:06.410 [130/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:06.410 [131/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:06.410 [132/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:06.410 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:06.410 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:06.410 [135/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:06.410 [136/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:06.410 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:06.410 [138/265] Linking static target lib/librte_rcu.a 00:01:06.410 [139/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:06.410 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:06.410 [141/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:06.410 [142/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:06.669 [143/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:06.669 [144/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:06.669 [145/265] Linking static target lib/librte_dmadev.a 00:01:06.669 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:06.669 [147/265] Linking static target lib/librte_eal.a 00:01:06.669 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:06.669 [149/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:06.669 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:06.669 [151/265] Linking static target lib/librte_compressdev.a 00:01:06.669 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:06.669 [153/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:06.669 [154/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:06.669 [155/265] Linking static target lib/librte_reorder.a 00:01:06.669 [156/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:06.669 
[157/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.669 [158/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:06.669 [159/265] Linking static target lib/librte_mbuf.a 00:01:06.669 [160/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.669 [161/265] Linking target lib/librte_log.so.24.0 00:01:06.669 [162/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:06.669 [163/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:06.669 [164/265] Linking static target lib/librte_security.a 00:01:06.669 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:06.669 [166/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.669 [167/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:06.669 [168/265] Linking static target lib/librte_power.a 00:01:06.669 [169/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:06.669 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:06.669 [171/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:06.669 [172/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:06.669 [173/265] Linking static target lib/librte_hash.a 00:01:06.669 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:06.669 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:06.669 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:06.928 [177/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:06.928 [178/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:06.928 [179/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.928 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:06.928 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:06.928 [182/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:06.928 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:06.928 [184/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:06.928 [185/265] Linking target lib/librte_kvargs.so.24.0 00:01:06.928 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:06.928 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:06.928 [188/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:06.928 [189/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.928 [190/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:06.928 [191/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:06.928 [192/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:06.928 [193/265] Linking static target drivers/librte_bus_vdev.a 00:01:06.928 [194/265] Linking static target lib/librte_cryptodev.a 00:01:06.928 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:06.928 [196/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.928 [197/265] 
Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:06.928 [198/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:06.928 [199/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.928 [200/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:06.928 [201/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:06.928 [202/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:06.928 [203/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:06.928 [204/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:06.928 [205/265] Linking static target drivers/librte_mempool_ring.a 00:01:06.928 [206/265] Linking target lib/librte_telemetry.so.24.0 00:01:07.186 [207/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:07.186 [208/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:07.186 [209/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.186 [210/265] Linking static target drivers/librte_bus_pci.a 00:01:07.186 [211/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.186 [212/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:07.186 [213/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:07.186 [214/265] Linking static target lib/librte_ethdev.a 00:01:07.445 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.445 [216/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.445 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.445 [218/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.445 [219/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:07.445 [220/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.704 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.704 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.704 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.963 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.531 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:08.531 [226/265] Linking static target lib/librte_vhost.a 00:01:09.100 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.005 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.281 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.820 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.820 [231/265] Linking target lib/librte_eal.so.24.0 00:01:19.080 [232/265] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:19.080 [233/265] Linking target lib/librte_timer.so.24.0 00:01:19.080 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:19.080 [235/265] Linking target lib/librte_ring.so.24.0 00:01:19.080 [236/265] Linking target lib/librte_meter.so.24.0 00:01:19.080 [237/265] Linking target lib/librte_pci.so.24.0 00:01:19.080 [238/265] Linking target lib/librte_dmadev.so.24.0 00:01:19.340 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:19.340 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:19.340 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:19.340 [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:19.340 [243/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:19.340 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:19.340 [245/265] Linking target lib/librte_rcu.so.24.0 00:01:19.340 [246/265] Linking target lib/librte_mempool.so.24.0 00:01:19.340 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:19.340 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:19.599 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:19.599 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:19.599 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:19.599 [252/265] Linking target lib/librte_reorder.so.24.0 00:01:19.599 [253/265] Linking target lib/librte_compressdev.so.24.0 00:01:19.599 [254/265] Linking target lib/librte_net.so.24.0 00:01:19.599 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:19.859 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:19.859 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:19.859 [258/265] Linking target lib/librte_hash.so.24.0 00:01:19.859 [259/265] Linking target lib/librte_security.so.24.0 00:01:19.859 [260/265] Linking target lib/librte_cmdline.so.24.0 00:01:19.859 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:20.118 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:20.118 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:20.118 [264/265] Linking target lib/librte_power.so.24.0 00:01:20.118 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:20.118 INFO: autodetecting backend as ninja 00:01:20.118 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:21.544 CC lib/ut_mock/mock.o 00:01:21.544 CC lib/log/log.o 00:01:21.544 CC lib/log/log_flags.o 00:01:21.544 CC lib/log/log_deprecated.o 00:01:21.544 CC lib/ut/ut.o 00:01:21.544 LIB libspdk_ut_mock.a 00:01:21.544 SO libspdk_ut_mock.so.6.0 00:01:21.544 LIB libspdk_log.a 00:01:21.544 LIB libspdk_ut.a 00:01:21.544 SO libspdk_log.so.7.0 00:01:21.544 SO libspdk_ut.so.2.0 00:01:21.544 SYMLINK libspdk_ut_mock.so 00:01:21.544 SYMLINK libspdk_ut.so 00:01:21.544 SYMLINK libspdk_log.so 00:01:21.803 CXX lib/trace_parser/trace.o 00:01:21.803 CC lib/util/base64.o 00:01:21.803 CC lib/util/bit_array.o 00:01:21.803 CC lib/util/cpuset.o 00:01:21.803 CC lib/util/crc16.o 00:01:21.803 CC lib/util/crc32.o 00:01:21.803 CC 
lib/util/crc32c.o 00:01:21.803 CC lib/util/crc32_ieee.o 00:01:21.803 CC lib/util/crc64.o 00:01:21.803 CC lib/util/dif.o 00:01:21.803 CC lib/util/fd.o 00:01:21.803 CC lib/ioat/ioat.o 00:01:21.803 CC lib/util/file.o 00:01:21.803 CC lib/dma/dma.o 00:01:21.803 CC lib/util/hexlify.o 00:01:21.803 CC lib/util/iov.o 00:01:21.803 CC lib/util/math.o 00:01:21.803 CC lib/util/pipe.o 00:01:21.803 CC lib/util/strerror_tls.o 00:01:21.803 CC lib/util/string.o 00:01:21.803 CC lib/util/xor.o 00:01:21.803 CC lib/util/uuid.o 00:01:21.803 CC lib/util/fd_group.o 00:01:21.803 CC lib/util/zipf.o 00:01:22.061 CC lib/vfio_user/host/vfio_user_pci.o 00:01:22.061 CC lib/vfio_user/host/vfio_user.o 00:01:22.061 LIB libspdk_dma.a 00:01:22.061 SO libspdk_dma.so.4.0 00:01:22.061 LIB libspdk_ioat.a 00:01:22.061 SO libspdk_ioat.so.7.0 00:01:22.061 SYMLINK libspdk_dma.so 00:01:22.321 LIB libspdk_vfio_user.a 00:01:22.321 SYMLINK libspdk_ioat.so 00:01:22.321 SO libspdk_vfio_user.so.5.0 00:01:22.321 LIB libspdk_util.a 00:01:22.321 SYMLINK libspdk_vfio_user.so 00:01:22.321 SO libspdk_util.so.9.0 00:01:22.580 SYMLINK libspdk_util.so 00:01:22.580 LIB libspdk_trace_parser.a 00:01:22.580 SO libspdk_trace_parser.so.5.0 00:01:22.580 SYMLINK libspdk_trace_parser.so 00:01:22.838 CC lib/rdma/common.o 00:01:22.838 CC lib/rdma/rdma_verbs.o 00:01:22.838 CC lib/conf/conf.o 00:01:22.838 CC lib/env_dpdk/env.o 00:01:22.838 CC lib/vmd/vmd.o 00:01:22.838 CC lib/env_dpdk/memory.o 00:01:22.838 CC lib/vmd/led.o 00:01:22.838 CC lib/env_dpdk/pci.o 00:01:22.838 CC lib/env_dpdk/init.o 00:01:22.838 CC lib/env_dpdk/pci_ioat.o 00:01:22.838 CC lib/env_dpdk/threads.o 00:01:22.838 CC lib/env_dpdk/pci_vmd.o 00:01:22.838 CC lib/env_dpdk/pci_virtio.o 00:01:22.838 CC lib/json/json_parse.o 00:01:22.838 CC lib/json/json_util.o 00:01:22.838 CC lib/env_dpdk/pci_idxd.o 00:01:22.838 CC lib/env_dpdk/sigbus_handler.o 00:01:22.838 CC lib/json/json_write.o 00:01:22.838 CC lib/env_dpdk/pci_event.o 00:01:22.838 CC lib/env_dpdk/pci_dpdk.o 00:01:22.838 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:22.838 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:22.838 CC lib/idxd/idxd.o 00:01:22.838 CC lib/idxd/idxd_user.o 00:01:23.096 LIB libspdk_conf.a 00:01:23.096 SO libspdk_conf.so.6.0 00:01:23.096 LIB libspdk_json.a 00:01:23.096 LIB libspdk_rdma.a 00:01:23.096 SO libspdk_rdma.so.6.0 00:01:23.096 SO libspdk_json.so.6.0 00:01:23.096 SYMLINK libspdk_conf.so 00:01:23.096 SYMLINK libspdk_rdma.so 00:01:23.096 SYMLINK libspdk_json.so 00:01:23.354 LIB libspdk_idxd.a 00:01:23.354 SO libspdk_idxd.so.12.0 00:01:23.354 LIB libspdk_vmd.a 00:01:23.354 SO libspdk_vmd.so.6.0 00:01:23.354 SYMLINK libspdk_idxd.so 00:01:23.354 SYMLINK libspdk_vmd.so 00:01:23.613 CC lib/jsonrpc/jsonrpc_server.o 00:01:23.613 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:23.613 CC lib/jsonrpc/jsonrpc_client.o 00:01:23.613 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:23.873 LIB libspdk_jsonrpc.a 00:01:23.873 SO libspdk_jsonrpc.so.6.0 00:01:23.873 LIB libspdk_env_dpdk.a 00:01:23.873 SYMLINK libspdk_jsonrpc.so 00:01:23.873 SO libspdk_env_dpdk.so.14.0 00:01:24.133 SYMLINK libspdk_env_dpdk.so 00:01:24.133 CC lib/rpc/rpc.o 00:01:24.392 LIB libspdk_rpc.a 00:01:24.392 SO libspdk_rpc.so.6.0 00:01:24.392 SYMLINK libspdk_rpc.so 00:01:24.960 CC lib/notify/notify.o 00:01:24.960 CC lib/notify/notify_rpc.o 00:01:24.960 CC lib/trace/trace_flags.o 00:01:24.960 CC lib/trace/trace.o 00:01:24.960 CC lib/trace/trace_rpc.o 00:01:24.960 CC lib/keyring/keyring.o 00:01:24.960 CC lib/keyring/keyring_rpc.o 00:01:24.960 LIB libspdk_notify.a 00:01:24.960 SO 
libspdk_notify.so.6.0 00:01:24.960 LIB libspdk_trace.a 00:01:24.960 LIB libspdk_keyring.a 00:01:24.960 SYMLINK libspdk_notify.so 00:01:24.960 SO libspdk_trace.so.10.0 00:01:25.219 SO libspdk_keyring.so.1.0 00:01:25.219 SYMLINK libspdk_trace.so 00:01:25.219 SYMLINK libspdk_keyring.so 00:01:25.479 CC lib/sock/sock.o 00:01:25.479 CC lib/sock/sock_rpc.o 00:01:25.479 CC lib/thread/thread.o 00:01:25.479 CC lib/thread/iobuf.o 00:01:25.739 LIB libspdk_sock.a 00:01:25.739 SO libspdk_sock.so.9.0 00:01:25.998 SYMLINK libspdk_sock.so 00:01:26.257 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:26.257 CC lib/nvme/nvme_ctrlr.o 00:01:26.257 CC lib/nvme/nvme_ns_cmd.o 00:01:26.257 CC lib/nvme/nvme_fabric.o 00:01:26.257 CC lib/nvme/nvme_ns.o 00:01:26.257 CC lib/nvme/nvme_pcie_common.o 00:01:26.257 CC lib/nvme/nvme_pcie.o 00:01:26.257 CC lib/nvme/nvme_qpair.o 00:01:26.257 CC lib/nvme/nvme.o 00:01:26.257 CC lib/nvme/nvme_quirks.o 00:01:26.257 CC lib/nvme/nvme_transport.o 00:01:26.257 CC lib/nvme/nvme_discovery.o 00:01:26.257 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:26.257 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:26.257 CC lib/nvme/nvme_tcp.o 00:01:26.257 CC lib/nvme/nvme_opal.o 00:01:26.257 CC lib/nvme/nvme_io_msg.o 00:01:26.257 CC lib/nvme/nvme_poll_group.o 00:01:26.257 CC lib/nvme/nvme_zns.o 00:01:26.257 CC lib/nvme/nvme_stubs.o 00:01:26.257 CC lib/nvme/nvme_auth.o 00:01:26.257 CC lib/nvme/nvme_cuse.o 00:01:26.257 CC lib/nvme/nvme_vfio_user.o 00:01:26.257 CC lib/nvme/nvme_rdma.o 00:01:26.517 LIB libspdk_thread.a 00:01:26.517 SO libspdk_thread.so.10.0 00:01:26.777 SYMLINK libspdk_thread.so 00:01:27.036 CC lib/blob/request.o 00:01:27.036 CC lib/blob/zeroes.o 00:01:27.036 CC lib/blob/blobstore.o 00:01:27.036 CC lib/blob/blob_bs_dev.o 00:01:27.036 CC lib/vfu_tgt/tgt_endpoint.o 00:01:27.036 CC lib/vfu_tgt/tgt_rpc.o 00:01:27.036 CC lib/accel/accel.o 00:01:27.036 CC lib/accel/accel_rpc.o 00:01:27.036 CC lib/accel/accel_sw.o 00:01:27.036 CC lib/virtio/virtio.o 00:01:27.036 CC lib/virtio/virtio_vhost_user.o 00:01:27.036 CC lib/init/json_config.o 00:01:27.036 CC lib/virtio/virtio_vfio_user.o 00:01:27.036 CC lib/init/subsystem.o 00:01:27.036 CC lib/virtio/virtio_pci.o 00:01:27.036 CC lib/init/subsystem_rpc.o 00:01:27.036 CC lib/init/rpc.o 00:01:27.295 LIB libspdk_init.a 00:01:27.295 LIB libspdk_virtio.a 00:01:27.295 SO libspdk_init.so.5.0 00:01:27.295 LIB libspdk_vfu_tgt.a 00:01:27.295 SO libspdk_virtio.so.7.0 00:01:27.295 SO libspdk_vfu_tgt.so.3.0 00:01:27.295 SYMLINK libspdk_init.so 00:01:27.295 SYMLINK libspdk_virtio.so 00:01:27.295 SYMLINK libspdk_vfu_tgt.so 00:01:27.555 CC lib/event/app.o 00:01:27.555 CC lib/event/reactor.o 00:01:27.555 CC lib/event/log_rpc.o 00:01:27.555 CC lib/event/app_rpc.o 00:01:27.555 CC lib/event/scheduler_static.o 00:01:27.815 LIB libspdk_accel.a 00:01:27.815 SO libspdk_accel.so.15.0 00:01:27.815 LIB libspdk_nvme.a 00:01:27.815 SYMLINK libspdk_accel.so 00:01:27.815 SO libspdk_nvme.so.13.0 00:01:28.075 LIB libspdk_event.a 00:01:28.075 SO libspdk_event.so.13.0 00:01:28.075 SYMLINK libspdk_event.so 00:01:28.075 SYMLINK libspdk_nvme.so 00:01:28.075 CC lib/bdev/bdev.o 00:01:28.075 CC lib/bdev/bdev_rpc.o 00:01:28.075 CC lib/bdev/bdev_zone.o 00:01:28.075 CC lib/bdev/part.o 00:01:28.075 CC lib/bdev/scsi_nvme.o 00:01:29.014 LIB libspdk_blob.a 00:01:29.014 SO libspdk_blob.so.11.0 00:01:29.014 SYMLINK libspdk_blob.so 00:01:29.582 CC lib/blobfs/blobfs.o 00:01:29.582 CC lib/blobfs/tree.o 00:01:29.582 CC lib/lvol/lvol.o 00:01:29.841 LIB libspdk_bdev.a 00:01:30.099 SO libspdk_bdev.so.15.0 00:01:30.099 LIB 
libspdk_blobfs.a 00:01:30.099 SO libspdk_blobfs.so.10.0 00:01:30.099 SYMLINK libspdk_bdev.so 00:01:30.099 LIB libspdk_lvol.a 00:01:30.099 SYMLINK libspdk_blobfs.so 00:01:30.099 SO libspdk_lvol.so.10.0 00:01:30.358 SYMLINK libspdk_lvol.so 00:01:30.358 CC lib/ftl/ftl_core.o 00:01:30.358 CC lib/ftl/ftl_layout.o 00:01:30.358 CC lib/ftl/ftl_init.o 00:01:30.358 CC lib/ftl/ftl_debug.o 00:01:30.358 CC lib/ftl/ftl_io.o 00:01:30.358 CC lib/ftl/ftl_sb.o 00:01:30.358 CC lib/ftl/ftl_nv_cache.o 00:01:30.358 CC lib/ftl/ftl_l2p.o 00:01:30.358 CC lib/ftl/ftl_l2p_flat.o 00:01:30.358 CC lib/ftl/ftl_band.o 00:01:30.358 CC lib/ftl/ftl_band_ops.o 00:01:30.358 CC lib/ftl/ftl_writer.o 00:01:30.358 CC lib/ftl/ftl_rq.o 00:01:30.358 CC lib/ftl/ftl_reloc.o 00:01:30.358 CC lib/ftl/ftl_l2p_cache.o 00:01:30.358 CC lib/ftl/ftl_p2l.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt.o 00:01:30.358 CC lib/nbd/nbd.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:30.358 CC lib/nbd/nbd_rpc.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:30.358 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:30.358 CC lib/ftl/utils/ftl_md.o 00:01:30.358 CC lib/ftl/utils/ftl_conf.o 00:01:30.358 CC lib/ftl/utils/ftl_mempool.o 00:01:30.358 CC lib/ftl/utils/ftl_bitmap.o 00:01:30.358 CC lib/ftl/utils/ftl_property.o 00:01:30.358 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:30.358 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:30.358 CC lib/scsi/dev.o 00:01:30.358 CC lib/scsi/lun.o 00:01:30.358 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:30.358 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:30.358 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:30.358 CC lib/scsi/port.o 00:01:30.358 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:30.358 CC lib/scsi/scsi.o 00:01:30.358 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:30.358 CC lib/scsi/scsi_bdev.o 00:01:30.358 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:30.358 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:30.358 CC lib/ftl/base/ftl_base_bdev.o 00:01:30.358 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:30.358 CC lib/scsi/scsi_pr.o 00:01:30.358 CC lib/ftl/base/ftl_base_dev.o 00:01:30.358 CC lib/ftl/ftl_trace.o 00:01:30.358 CC lib/nvmf/ctrlr_discovery.o 00:01:30.358 CC lib/scsi/scsi_rpc.o 00:01:30.358 CC lib/scsi/task.o 00:01:30.358 CC lib/ublk/ublk.o 00:01:30.358 CC lib/nvmf/ctrlr.o 00:01:30.358 CC lib/ublk/ublk_rpc.o 00:01:30.358 CC lib/nvmf/ctrlr_bdev.o 00:01:30.358 CC lib/nvmf/subsystem.o 00:01:30.358 CC lib/nvmf/nvmf_rpc.o 00:01:30.358 CC lib/nvmf/nvmf.o 00:01:30.358 CC lib/nvmf/transport.o 00:01:30.358 CC lib/nvmf/tcp.o 00:01:30.358 CC lib/nvmf/stubs.o 00:01:30.358 CC lib/nvmf/vfio_user.o 00:01:30.358 CC lib/nvmf/rdma.o 00:01:30.358 CC lib/nvmf/auth.o 00:01:30.925 LIB libspdk_nbd.a 00:01:30.925 SO libspdk_nbd.so.7.0 00:01:30.925 SYMLINK libspdk_nbd.so 00:01:31.184 LIB libspdk_scsi.a 00:01:31.184 SO libspdk_scsi.so.9.0 00:01:31.184 LIB libspdk_ublk.a 00:01:31.184 SYMLINK libspdk_scsi.so 00:01:31.184 LIB libspdk_ftl.a 00:01:31.184 SO libspdk_ublk.so.3.0 00:01:31.444 SYMLINK libspdk_ublk.so 00:01:31.444 SO libspdk_ftl.so.9.0 00:01:31.444 CC lib/vhost/vhost.o 00:01:31.444 CC lib/vhost/vhost_rpc.o 00:01:31.703 CC lib/vhost/vhost_scsi.o 00:01:31.703 
CC lib/vhost/vhost_blk.o 00:01:31.703 CC lib/vhost/rte_vhost_user.o 00:01:31.703 CC lib/iscsi/conn.o 00:01:31.703 CC lib/iscsi/init_grp.o 00:01:31.703 CC lib/iscsi/iscsi.o 00:01:31.703 CC lib/iscsi/md5.o 00:01:31.703 CC lib/iscsi/param.o 00:01:31.703 CC lib/iscsi/iscsi_subsystem.o 00:01:31.703 CC lib/iscsi/portal_grp.o 00:01:31.703 CC lib/iscsi/tgt_node.o 00:01:31.703 CC lib/iscsi/iscsi_rpc.o 00:01:31.703 CC lib/iscsi/task.o 00:01:31.703 SYMLINK libspdk_ftl.so 00:01:32.272 LIB libspdk_nvmf.a 00:01:32.272 SO libspdk_nvmf.so.18.0 00:01:32.272 LIB libspdk_vhost.a 00:01:32.272 SYMLINK libspdk_nvmf.so 00:01:32.532 SO libspdk_vhost.so.8.0 00:01:32.532 SYMLINK libspdk_vhost.so 00:01:32.532 LIB libspdk_iscsi.a 00:01:32.532 SO libspdk_iscsi.so.8.0 00:01:32.793 SYMLINK libspdk_iscsi.so 00:01:33.363 CC module/vfu_device/vfu_virtio.o 00:01:33.363 CC module/vfu_device/vfu_virtio_scsi.o 00:01:33.363 CC module/vfu_device/vfu_virtio_blk.o 00:01:33.363 CC module/vfu_device/vfu_virtio_rpc.o 00:01:33.363 CC module/env_dpdk/env_dpdk_rpc.o 00:01:33.363 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:33.363 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:33.363 CC module/scheduler/gscheduler/gscheduler.o 00:01:33.363 CC module/accel/error/accel_error.o 00:01:33.363 CC module/accel/error/accel_error_rpc.o 00:01:33.363 CC module/blob/bdev/blob_bdev.o 00:01:33.622 LIB libspdk_env_dpdk_rpc.a 00:01:33.622 CC module/accel/iaa/accel_iaa.o 00:01:33.622 CC module/accel/iaa/accel_iaa_rpc.o 00:01:33.622 CC module/sock/posix/posix.o 00:01:33.622 CC module/keyring/file/keyring.o 00:01:33.622 CC module/accel/dsa/accel_dsa.o 00:01:33.622 CC module/keyring/file/keyring_rpc.o 00:01:33.622 CC module/accel/dsa/accel_dsa_rpc.o 00:01:33.622 CC module/accel/ioat/accel_ioat.o 00:01:33.622 CC module/accel/ioat/accel_ioat_rpc.o 00:01:33.622 SO libspdk_env_dpdk_rpc.so.6.0 00:01:33.622 SYMLINK libspdk_env_dpdk_rpc.so 00:01:33.622 LIB libspdk_scheduler_dpdk_governor.a 00:01:33.622 LIB libspdk_scheduler_gscheduler.a 00:01:33.622 LIB libspdk_keyring_file.a 00:01:33.622 LIB libspdk_accel_error.a 00:01:33.622 LIB libspdk_scheduler_dynamic.a 00:01:33.622 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:33.622 SO libspdk_scheduler_gscheduler.so.4.0 00:01:33.622 SO libspdk_keyring_file.so.1.0 00:01:33.622 SO libspdk_scheduler_dynamic.so.4.0 00:01:33.622 LIB libspdk_accel_ioat.a 00:01:33.622 SO libspdk_accel_error.so.2.0 00:01:33.622 LIB libspdk_accel_iaa.a 00:01:33.622 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:33.622 LIB libspdk_accel_dsa.a 00:01:33.622 SO libspdk_accel_ioat.so.6.0 00:01:33.622 SYMLINK libspdk_keyring_file.so 00:01:33.622 SYMLINK libspdk_scheduler_gscheduler.so 00:01:33.622 LIB libspdk_blob_bdev.a 00:01:33.623 SO libspdk_accel_iaa.so.3.0 00:01:33.882 SO libspdk_accel_dsa.so.5.0 00:01:33.882 SO libspdk_blob_bdev.so.11.0 00:01:33.882 SYMLINK libspdk_scheduler_dynamic.so 00:01:33.882 SYMLINK libspdk_accel_error.so 00:01:33.882 SYMLINK libspdk_accel_ioat.so 00:01:33.882 LIB libspdk_vfu_device.a 00:01:33.883 SYMLINK libspdk_accel_iaa.so 00:01:33.883 SYMLINK libspdk_blob_bdev.so 00:01:33.883 SYMLINK libspdk_accel_dsa.so 00:01:33.883 SO libspdk_vfu_device.so.3.0 00:01:33.883 SYMLINK libspdk_vfu_device.so 00:01:34.141 LIB libspdk_sock_posix.a 00:01:34.141 SO libspdk_sock_posix.so.6.0 00:01:34.141 SYMLINK libspdk_sock_posix.so 00:01:34.401 CC module/bdev/delay/vbdev_delay.o 00:01:34.401 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:34.401 CC module/bdev/error/vbdev_error.o 00:01:34.401 CC 
module/bdev/error/vbdev_error_rpc.o 00:01:34.401 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:34.401 CC module/bdev/malloc/bdev_malloc.o 00:01:34.401 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:34.401 CC module/bdev/lvol/vbdev_lvol.o 00:01:34.401 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:34.401 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:34.401 CC module/bdev/iscsi/bdev_iscsi.o 00:01:34.401 CC module/bdev/nvme/bdev_nvme.o 00:01:34.401 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:34.401 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:34.401 CC module/bdev/nvme/vbdev_opal.o 00:01:34.401 CC module/bdev/gpt/gpt.o 00:01:34.401 CC module/bdev/nvme/nvme_rpc.o 00:01:34.401 CC module/bdev/nvme/bdev_mdns_client.o 00:01:34.401 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:34.401 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:34.401 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:34.401 CC module/bdev/gpt/vbdev_gpt.o 00:01:34.401 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:34.401 CC module/bdev/split/vbdev_split_rpc.o 00:01:34.401 CC module/bdev/split/vbdev_split.o 00:01:34.401 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:34.401 CC module/bdev/raid/bdev_raid.o 00:01:34.401 CC module/bdev/raid/bdev_raid_rpc.o 00:01:34.401 CC module/bdev/raid/bdev_raid_sb.o 00:01:34.401 CC module/bdev/raid/raid0.o 00:01:34.401 CC module/bdev/passthru/vbdev_passthru.o 00:01:34.401 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:34.401 CC module/bdev/raid/raid1.o 00:01:34.401 CC module/bdev/raid/concat.o 00:01:34.401 CC module/bdev/null/bdev_null.o 00:01:34.401 CC module/bdev/null/bdev_null_rpc.o 00:01:34.401 CC module/bdev/aio/bdev_aio.o 00:01:34.401 CC module/bdev/aio/bdev_aio_rpc.o 00:01:34.401 CC module/bdev/ftl/bdev_ftl.o 00:01:34.401 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:34.401 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:34.401 CC module/blobfs/bdev/blobfs_bdev.o 00:01:34.660 LIB libspdk_blobfs_bdev.a 00:01:34.660 LIB libspdk_bdev_split.a 00:01:34.660 LIB libspdk_bdev_error.a 00:01:34.660 SO libspdk_blobfs_bdev.so.6.0 00:01:34.660 LIB libspdk_bdev_null.a 00:01:34.660 LIB libspdk_bdev_zone_block.a 00:01:34.660 SO libspdk_bdev_error.so.6.0 00:01:34.660 LIB libspdk_bdev_gpt.a 00:01:34.660 SO libspdk_bdev_split.so.6.0 00:01:34.660 LIB libspdk_bdev_ftl.a 00:01:34.660 LIB libspdk_bdev_passthru.a 00:01:34.660 LIB libspdk_bdev_delay.a 00:01:34.660 SO libspdk_bdev_zone_block.so.6.0 00:01:34.660 SO libspdk_bdev_null.so.6.0 00:01:34.660 LIB libspdk_bdev_iscsi.a 00:01:34.660 LIB libspdk_bdev_malloc.a 00:01:34.660 SYMLINK libspdk_blobfs_bdev.so 00:01:34.660 SO libspdk_bdev_gpt.so.6.0 00:01:34.660 LIB libspdk_bdev_aio.a 00:01:34.660 SO libspdk_bdev_ftl.so.6.0 00:01:34.660 SYMLINK libspdk_bdev_error.so 00:01:34.660 SO libspdk_bdev_malloc.so.6.0 00:01:34.660 SO libspdk_bdev_passthru.so.6.0 00:01:34.660 SO libspdk_bdev_delay.so.6.0 00:01:34.660 SYMLINK libspdk_bdev_split.so 00:01:34.660 SO libspdk_bdev_iscsi.so.6.0 00:01:34.660 SO libspdk_bdev_aio.so.6.0 00:01:34.660 SYMLINK libspdk_bdev_zone_block.so 00:01:34.660 SYMLINK libspdk_bdev_gpt.so 00:01:34.660 SYMLINK libspdk_bdev_null.so 00:01:34.660 SYMLINK libspdk_bdev_ftl.so 00:01:34.660 SYMLINK libspdk_bdev_delay.so 00:01:34.660 SYMLINK libspdk_bdev_malloc.so 00:01:34.660 SYMLINK libspdk_bdev_passthru.so 00:01:34.919 SYMLINK libspdk_bdev_iscsi.so 00:01:34.919 LIB libspdk_bdev_lvol.a 00:01:34.919 SYMLINK libspdk_bdev_aio.so 00:01:34.919 LIB libspdk_bdev_virtio.a 00:01:34.919 SO libspdk_bdev_lvol.so.6.0 00:01:34.919 SO libspdk_bdev_virtio.so.6.0 00:01:34.919 SYMLINK 
libspdk_bdev_lvol.so 00:01:34.919 SYMLINK libspdk_bdev_virtio.so 00:01:35.179 LIB libspdk_bdev_raid.a 00:01:35.179 SO libspdk_bdev_raid.so.6.0 00:01:35.179 SYMLINK libspdk_bdev_raid.so 00:01:36.146 LIB libspdk_bdev_nvme.a 00:01:36.146 SO libspdk_bdev_nvme.so.7.0 00:01:36.146 SYMLINK libspdk_bdev_nvme.so 00:01:36.714 CC module/event/subsystems/vmd/vmd.o 00:01:36.714 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:36.714 CC module/event/subsystems/iobuf/iobuf.o 00:01:36.714 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:36.714 CC module/event/subsystems/keyring/keyring.o 00:01:36.715 CC module/event/subsystems/sock/sock.o 00:01:36.715 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:36.715 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:36.715 CC module/event/subsystems/scheduler/scheduler.o 00:01:36.974 LIB libspdk_event_vfu_tgt.a 00:01:36.974 LIB libspdk_event_vmd.a 00:01:36.974 LIB libspdk_event_keyring.a 00:01:36.974 LIB libspdk_event_vhost_blk.a 00:01:36.974 LIB libspdk_event_sock.a 00:01:36.974 LIB libspdk_event_iobuf.a 00:01:36.974 SO libspdk_event_vmd.so.6.0 00:01:36.974 LIB libspdk_event_scheduler.a 00:01:36.974 SO libspdk_event_vfu_tgt.so.3.0 00:01:36.974 SO libspdk_event_keyring.so.1.0 00:01:36.974 SO libspdk_event_sock.so.5.0 00:01:36.974 SO libspdk_event_vhost_blk.so.3.0 00:01:36.974 SO libspdk_event_iobuf.so.3.0 00:01:36.974 SO libspdk_event_scheduler.so.4.0 00:01:36.974 SYMLINK libspdk_event_keyring.so 00:01:36.974 SYMLINK libspdk_event_vfu_tgt.so 00:01:36.974 SYMLINK libspdk_event_vmd.so 00:01:36.974 SYMLINK libspdk_event_vhost_blk.so 00:01:36.974 SYMLINK libspdk_event_sock.so 00:01:36.974 SYMLINK libspdk_event_iobuf.so 00:01:36.974 SYMLINK libspdk_event_scheduler.so 00:01:37.543 CC module/event/subsystems/accel/accel.o 00:01:37.543 LIB libspdk_event_accel.a 00:01:37.543 SO libspdk_event_accel.so.6.0 00:01:37.543 SYMLINK libspdk_event_accel.so 00:01:38.112 CC module/event/subsystems/bdev/bdev.o 00:01:38.112 LIB libspdk_event_bdev.a 00:01:38.112 SO libspdk_event_bdev.so.6.0 00:01:38.371 SYMLINK libspdk_event_bdev.so 00:01:38.630 CC module/event/subsystems/nbd/nbd.o 00:01:38.630 CC module/event/subsystems/scsi/scsi.o 00:01:38.630 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:38.630 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:38.630 CC module/event/subsystems/ublk/ublk.o 00:01:38.630 LIB libspdk_event_nbd.a 00:01:38.630 LIB libspdk_event_scsi.a 00:01:38.889 LIB libspdk_event_ublk.a 00:01:38.889 SO libspdk_event_scsi.so.6.0 00:01:38.889 LIB libspdk_event_nvmf.a 00:01:38.889 SO libspdk_event_nbd.so.6.0 00:01:38.889 SO libspdk_event_ublk.so.3.0 00:01:38.889 SO libspdk_event_nvmf.so.6.0 00:01:38.889 SYMLINK libspdk_event_scsi.so 00:01:38.889 SYMLINK libspdk_event_nbd.so 00:01:38.889 SYMLINK libspdk_event_ublk.so 00:01:38.889 SYMLINK libspdk_event_nvmf.so 00:01:39.148 CC module/event/subsystems/iscsi/iscsi.o 00:01:39.148 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:39.407 LIB libspdk_event_iscsi.a 00:01:39.407 LIB libspdk_event_vhost_scsi.a 00:01:39.407 SO libspdk_event_iscsi.so.6.0 00:01:39.407 SO libspdk_event_vhost_scsi.so.3.0 00:01:39.407 SYMLINK libspdk_event_iscsi.so 00:01:39.407 SYMLINK libspdk_event_vhost_scsi.so 00:01:39.666 SO libspdk.so.6.0 00:01:39.666 SYMLINK libspdk.so 00:01:39.924 CXX app/trace/trace.o 00:01:39.924 CC app/trace_record/trace_record.o 00:01:39.924 CC app/spdk_nvme_discover/discovery_aer.o 00:01:39.924 TEST_HEADER include/spdk/accel.h 00:01:39.924 TEST_HEADER include/spdk/accel_module.h 00:01:39.924 TEST_HEADER 
include/spdk/assert.h 00:01:39.924 CC app/spdk_lspci/spdk_lspci.o 00:01:39.924 TEST_HEADER include/spdk/barrier.h 00:01:39.924 CC test/rpc_client/rpc_client_test.o 00:01:39.924 TEST_HEADER include/spdk/base64.h 00:01:39.924 TEST_HEADER include/spdk/bdev.h 00:01:39.924 TEST_HEADER include/spdk/bdev_zone.h 00:01:39.924 TEST_HEADER include/spdk/bit_array.h 00:01:39.924 TEST_HEADER include/spdk/bdev_module.h 00:01:39.924 CC app/spdk_nvme_identify/identify.o 00:01:39.924 TEST_HEADER include/spdk/blob_bdev.h 00:01:39.924 TEST_HEADER include/spdk/bit_pool.h 00:01:39.924 CC app/spdk_nvme_perf/perf.o 00:01:39.924 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:39.924 TEST_HEADER include/spdk/blobfs.h 00:01:39.924 TEST_HEADER include/spdk/blob.h 00:01:39.924 TEST_HEADER include/spdk/conf.h 00:01:39.924 CC app/spdk_top/spdk_top.o 00:01:39.924 TEST_HEADER include/spdk/config.h 00:01:39.924 TEST_HEADER include/spdk/cpuset.h 00:01:39.924 TEST_HEADER include/spdk/crc16.h 00:01:39.924 TEST_HEADER include/spdk/crc32.h 00:01:39.924 TEST_HEADER include/spdk/crc64.h 00:01:39.925 TEST_HEADER include/spdk/dif.h 00:01:39.925 TEST_HEADER include/spdk/endian.h 00:01:39.925 TEST_HEADER include/spdk/dma.h 00:01:39.925 TEST_HEADER include/spdk/env.h 00:01:39.925 TEST_HEADER include/spdk/env_dpdk.h 00:01:39.925 TEST_HEADER include/spdk/fd_group.h 00:01:39.925 TEST_HEADER include/spdk/event.h 00:01:39.925 TEST_HEADER include/spdk/fd.h 00:01:39.925 TEST_HEADER include/spdk/file.h 00:01:39.925 TEST_HEADER include/spdk/gpt_spec.h 00:01:39.925 TEST_HEADER include/spdk/hexlify.h 00:01:39.925 TEST_HEADER include/spdk/ftl.h 00:01:40.189 TEST_HEADER include/spdk/histogram_data.h 00:01:40.189 TEST_HEADER include/spdk/idxd.h 00:01:40.189 TEST_HEADER include/spdk/idxd_spec.h 00:01:40.189 TEST_HEADER include/spdk/init.h 00:01:40.189 TEST_HEADER include/spdk/ioat.h 00:01:40.189 TEST_HEADER include/spdk/ioat_spec.h 00:01:40.189 TEST_HEADER include/spdk/iscsi_spec.h 00:01:40.189 TEST_HEADER include/spdk/json.h 00:01:40.189 TEST_HEADER include/spdk/jsonrpc.h 00:01:40.189 TEST_HEADER include/spdk/keyring_module.h 00:01:40.189 TEST_HEADER include/spdk/keyring.h 00:01:40.189 TEST_HEADER include/spdk/likely.h 00:01:40.189 TEST_HEADER include/spdk/log.h 00:01:40.189 TEST_HEADER include/spdk/lvol.h 00:01:40.189 TEST_HEADER include/spdk/memory.h 00:01:40.189 TEST_HEADER include/spdk/nbd.h 00:01:40.189 TEST_HEADER include/spdk/mmio.h 00:01:40.189 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:40.189 TEST_HEADER include/spdk/notify.h 00:01:40.189 CC app/spdk_dd/spdk_dd.o 00:01:40.189 TEST_HEADER include/spdk/nvme.h 00:01:40.189 TEST_HEADER include/spdk/nvme_intel.h 00:01:40.189 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:40.189 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:40.189 TEST_HEADER include/spdk/nvme_spec.h 00:01:40.189 TEST_HEADER include/spdk/nvme_zns.h 00:01:40.189 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:40.189 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:40.189 TEST_HEADER include/spdk/nvmf.h 00:01:40.189 TEST_HEADER include/spdk/nvmf_spec.h 00:01:40.189 TEST_HEADER include/spdk/nvmf_transport.h 00:01:40.189 TEST_HEADER include/spdk/opal.h 00:01:40.189 CC app/nvmf_tgt/nvmf_main.o 00:01:40.189 TEST_HEADER include/spdk/opal_spec.h 00:01:40.189 TEST_HEADER include/spdk/pci_ids.h 00:01:40.189 TEST_HEADER include/spdk/queue.h 00:01:40.189 TEST_HEADER include/spdk/pipe.h 00:01:40.189 CC app/vhost/vhost.o 00:01:40.189 TEST_HEADER include/spdk/reduce.h 00:01:40.189 TEST_HEADER include/spdk/rpc.h 00:01:40.189 TEST_HEADER 
include/spdk/scsi.h 00:01:40.189 TEST_HEADER include/spdk/scheduler.h 00:01:40.189 TEST_HEADER include/spdk/scsi_spec.h 00:01:40.189 TEST_HEADER include/spdk/sock.h 00:01:40.189 CC app/spdk_tgt/spdk_tgt.o 00:01:40.189 TEST_HEADER include/spdk/stdinc.h 00:01:40.189 TEST_HEADER include/spdk/string.h 00:01:40.189 TEST_HEADER include/spdk/trace.h 00:01:40.189 TEST_HEADER include/spdk/thread.h 00:01:40.189 TEST_HEADER include/spdk/trace_parser.h 00:01:40.189 TEST_HEADER include/spdk/tree.h 00:01:40.189 TEST_HEADER include/spdk/ublk.h 00:01:40.189 TEST_HEADER include/spdk/util.h 00:01:40.189 TEST_HEADER include/spdk/uuid.h 00:01:40.189 TEST_HEADER include/spdk/version.h 00:01:40.189 CC app/iscsi_tgt/iscsi_tgt.o 00:01:40.189 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:40.189 TEST_HEADER include/spdk/vhost.h 00:01:40.189 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:40.189 TEST_HEADER include/spdk/vmd.h 00:01:40.189 TEST_HEADER include/spdk/xor.h 00:01:40.189 TEST_HEADER include/spdk/zipf.h 00:01:40.189 CXX test/cpp_headers/accel.o 00:01:40.189 CXX test/cpp_headers/accel_module.o 00:01:40.189 CXX test/cpp_headers/assert.o 00:01:40.189 CXX test/cpp_headers/barrier.o 00:01:40.189 CXX test/cpp_headers/base64.o 00:01:40.189 CXX test/cpp_headers/bdev.o 00:01:40.189 CXX test/cpp_headers/bdev_module.o 00:01:40.189 CXX test/cpp_headers/bdev_zone.o 00:01:40.189 CXX test/cpp_headers/bit_array.o 00:01:40.189 CXX test/cpp_headers/bit_pool.o 00:01:40.189 CXX test/cpp_headers/blob_bdev.o 00:01:40.189 CXX test/cpp_headers/blobfs.o 00:01:40.189 CXX test/cpp_headers/blobfs_bdev.o 00:01:40.189 CXX test/cpp_headers/blob.o 00:01:40.189 CXX test/cpp_headers/config.o 00:01:40.189 CXX test/cpp_headers/conf.o 00:01:40.189 CXX test/cpp_headers/cpuset.o 00:01:40.189 CXX test/cpp_headers/crc16.o 00:01:40.189 CXX test/cpp_headers/crc32.o 00:01:40.189 CXX test/cpp_headers/crc64.o 00:01:40.189 CXX test/cpp_headers/dif.o 00:01:40.189 CXX test/cpp_headers/dma.o 00:01:40.189 CXX test/cpp_headers/endian.o 00:01:40.189 CXX test/cpp_headers/env.o 00:01:40.189 CXX test/cpp_headers/env_dpdk.o 00:01:40.189 CXX test/cpp_headers/event.o 00:01:40.189 CXX test/cpp_headers/fd_group.o 00:01:40.189 CXX test/cpp_headers/fd.o 00:01:40.189 CXX test/cpp_headers/file.o 00:01:40.189 CXX test/cpp_headers/ftl.o 00:01:40.189 CXX test/cpp_headers/gpt_spec.o 00:01:40.189 CXX test/cpp_headers/histogram_data.o 00:01:40.189 CXX test/cpp_headers/hexlify.o 00:01:40.189 CXX test/cpp_headers/idxd.o 00:01:40.189 CXX test/cpp_headers/idxd_spec.o 00:01:40.189 CXX test/cpp_headers/init.o 00:01:40.189 CXX test/cpp_headers/ioat.o 00:01:40.189 CXX test/cpp_headers/ioat_spec.o 00:01:40.189 CC test/thread/poller_perf/poller_perf.o 00:01:40.189 CC test/app/histogram_perf/histogram_perf.o 00:01:40.189 CC test/nvme/overhead/overhead.o 00:01:40.189 CC test/app/stub/stub.o 00:01:40.189 CC test/nvme/simple_copy/simple_copy.o 00:01:40.189 CC test/nvme/compliance/nvme_compliance.o 00:01:40.189 CC test/nvme/aer/aer.o 00:01:40.189 CC examples/accel/perf/accel_perf.o 00:01:40.189 CC test/nvme/reset/reset.o 00:01:40.189 CC test/app/jsoncat/jsoncat.o 00:01:40.189 CC test/nvme/startup/startup.o 00:01:40.189 CC test/nvme/e2edp/nvme_dp.o 00:01:40.189 CC examples/nvme/hello_world/hello_world.o 00:01:40.189 CC test/nvme/fused_ordering/fused_ordering.o 00:01:40.189 CC test/env/memory/memory_ut.o 00:01:40.189 CC test/nvme/reserve/reserve.o 00:01:40.189 CC test/nvme/connect_stress/connect_stress.o 00:01:40.189 CC examples/vmd/lsvmd/lsvmd.o 00:01:40.189 CC test/nvme/sgl/sgl.o 
00:01:40.189 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:40.189 CC app/fio/nvme/fio_plugin.o 00:01:40.189 CC test/nvme/err_injection/err_injection.o 00:01:40.189 CC examples/sock/hello_world/hello_sock.o 00:01:40.189 CC test/event/event_perf/event_perf.o 00:01:40.189 CC test/nvme/boot_partition/boot_partition.o 00:01:40.189 CC test/nvme/fdp/fdp.o 00:01:40.189 CC test/env/pci/pci_ut.o 00:01:40.454 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:40.454 CC test/env/vtophys/vtophys.o 00:01:40.454 CC test/blobfs/mkfs/mkfs.o 00:01:40.454 CC examples/vmd/led/led.o 00:01:40.454 CC test/event/reactor_perf/reactor_perf.o 00:01:40.454 CC examples/nvme/abort/abort.o 00:01:40.454 CC examples/nvme/arbitration/arbitration.o 00:01:40.454 CC examples/nvme/hotplug/hotplug.o 00:01:40.454 CC examples/nvme/reconnect/reconnect.o 00:01:40.454 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:40.454 CC examples/ioat/verify/verify.o 00:01:40.454 CC test/nvme/cuse/cuse.o 00:01:40.454 CC examples/ioat/perf/perf.o 00:01:40.454 CC examples/util/zipf/zipf.o 00:01:40.454 CC examples/idxd/perf/perf.o 00:01:40.454 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:40.454 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:40.454 CC test/dma/test_dma/test_dma.o 00:01:40.454 CC test/event/reactor/reactor.o 00:01:40.454 CC test/bdev/bdevio/bdevio.o 00:01:40.454 CC test/event/app_repeat/app_repeat.o 00:01:40.454 CC test/app/bdev_svc/bdev_svc.o 00:01:40.454 CC examples/blob/hello_world/hello_blob.o 00:01:40.454 CC test/accel/dif/dif.o 00:01:40.454 CC examples/bdev/hello_world/hello_bdev.o 00:01:40.454 CC examples/blob/cli/blobcli.o 00:01:40.454 CC examples/bdev/bdevperf/bdevperf.o 00:01:40.454 CC test/event/scheduler/scheduler.o 00:01:40.454 CC examples/thread/thread/thread_ex.o 00:01:40.454 CC examples/nvmf/nvmf/nvmf.o 00:01:40.454 CC app/fio/bdev/fio_plugin.o 00:01:40.454 LINK spdk_lspci 00:01:40.723 LINK rpc_client_test 00:01:40.723 LINK spdk_nvme_discover 00:01:40.723 LINK nvmf_tgt 00:01:40.723 LINK interrupt_tgt 00:01:40.723 CC test/lvol/esnap/esnap.o 00:01:40.723 LINK vhost 00:01:40.723 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:40.723 CC test/env/mem_callbacks/mem_callbacks.o 00:01:40.723 LINK event_perf 00:01:40.723 LINK led 00:01:40.723 LINK poller_perf 00:01:40.723 LINK reactor_perf 00:01:40.723 LINK lsvmd 00:01:40.723 LINK iscsi_tgt 00:01:40.723 LINK spdk_tgt 00:01:40.981 LINK jsoncat 00:01:40.981 LINK spdk_trace_record 00:01:40.981 LINK histogram_perf 00:01:40.981 CXX test/cpp_headers/iscsi_spec.o 00:01:40.981 LINK boot_partition 00:01:40.981 LINK zipf 00:01:40.981 CXX test/cpp_headers/json.o 00:01:40.981 LINK stub 00:01:40.981 LINK connect_stress 00:01:40.981 LINK startup 00:01:40.981 LINK pmr_persistence 00:01:40.981 CXX test/cpp_headers/jsonrpc.o 00:01:40.981 LINK vtophys 00:01:40.981 LINK reactor 00:01:40.981 CXX test/cpp_headers/keyring.o 00:01:40.981 CXX test/cpp_headers/keyring_module.o 00:01:40.981 CXX test/cpp_headers/likely.o 00:01:40.981 LINK reserve 00:01:40.981 CXX test/cpp_headers/log.o 00:01:40.981 CXX test/cpp_headers/lvol.o 00:01:40.981 LINK err_injection 00:01:40.981 CXX test/cpp_headers/memory.o 00:01:40.981 CXX test/cpp_headers/mmio.o 00:01:40.981 CXX test/cpp_headers/nbd.o 00:01:40.981 CXX test/cpp_headers/notify.o 00:01:40.981 LINK app_repeat 00:01:40.981 CXX test/cpp_headers/nvme.o 00:01:40.981 CXX test/cpp_headers/nvme_intel.o 00:01:40.981 CXX test/cpp_headers/nvme_ocssd.o 00:01:40.981 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:40.981 LINK doorbell_aers 00:01:40.981 LINK 
env_dpdk_post_init 00:01:40.981 CXX test/cpp_headers/nvme_zns.o 00:01:40.981 CXX test/cpp_headers/nvme_spec.o 00:01:40.981 CXX test/cpp_headers/nvmf_cmd.o 00:01:40.981 LINK fused_ordering 00:01:40.981 CXX test/cpp_headers/nvmf_spec.o 00:01:40.981 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:40.981 CXX test/cpp_headers/nvmf.o 00:01:40.981 CXX test/cpp_headers/nvmf_transport.o 00:01:40.981 CXX test/cpp_headers/opal_spec.o 00:01:40.981 CXX test/cpp_headers/opal.o 00:01:40.981 CXX test/cpp_headers/pci_ids.o 00:01:40.981 LINK mkfs 00:01:40.981 CXX test/cpp_headers/pipe.o 00:01:40.981 LINK cmb_copy 00:01:40.981 CXX test/cpp_headers/queue.o 00:01:40.981 CXX test/cpp_headers/reduce.o 00:01:40.981 LINK verify 00:01:40.981 CXX test/cpp_headers/rpc.o 00:01:40.981 CXX test/cpp_headers/scheduler.o 00:01:40.982 CXX test/cpp_headers/scsi.o 00:01:40.982 CXX test/cpp_headers/scsi_spec.o 00:01:40.982 CXX test/cpp_headers/sock.o 00:01:40.982 CXX test/cpp_headers/stdinc.o 00:01:40.982 LINK simple_copy 00:01:40.982 LINK ioat_perf 00:01:40.982 LINK hello_sock 00:01:40.982 LINK bdev_svc 00:01:40.982 LINK hello_world 00:01:40.982 CXX test/cpp_headers/string.o 00:01:40.982 LINK reset 00:01:40.982 LINK scheduler 00:01:40.982 CXX test/cpp_headers/thread.o 00:01:40.982 LINK overhead 00:01:40.982 CXX test/cpp_headers/trace.o 00:01:40.982 LINK hotplug 00:01:40.982 LINK sgl 00:01:40.982 CXX test/cpp_headers/trace_parser.o 00:01:40.982 LINK thread 00:01:40.982 LINK hello_bdev 00:01:40.982 LINK nvme_dp 00:01:40.982 LINK hello_blob 00:01:40.982 LINK aer 00:01:41.241 CXX test/cpp_headers/tree.o 00:01:41.241 LINK spdk_dd 00:01:41.241 LINK nvme_compliance 00:01:41.241 CXX test/cpp_headers/ublk.o 00:01:41.241 CXX test/cpp_headers/util.o 00:01:41.241 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:41.241 LINK reconnect 00:01:41.241 LINK idxd_perf 00:01:41.241 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:41.241 CXX test/cpp_headers/uuid.o 00:01:41.241 LINK fdp 00:01:41.241 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:41.241 CXX test/cpp_headers/version.o 00:01:41.241 LINK nvmf 00:01:41.241 LINK arbitration 00:01:41.241 CXX test/cpp_headers/vfio_user_pci.o 00:01:41.241 CXX test/cpp_headers/vfio_user_spec.o 00:01:41.241 CXX test/cpp_headers/vhost.o 00:01:41.241 LINK abort 00:01:41.241 CXX test/cpp_headers/vmd.o 00:01:41.241 CXX test/cpp_headers/xor.o 00:01:41.241 CXX test/cpp_headers/zipf.o 00:01:41.241 LINK spdk_trace 00:01:41.241 LINK test_dma 00:01:41.241 LINK pci_ut 00:01:41.241 LINK dif 00:01:41.241 LINK bdevio 00:01:41.499 LINK accel_perf 00:01:41.499 LINK spdk_nvme 00:01:41.499 LINK blobcli 00:01:41.499 LINK spdk_bdev 00:01:41.499 LINK nvme_manage 00:01:41.499 LINK nvme_fuzz 00:01:41.758 LINK spdk_nvme_perf 00:01:41.758 LINK spdk_top 00:01:41.758 LINK spdk_nvme_identify 00:01:41.758 LINK vhost_fuzz 00:01:41.758 LINK mem_callbacks 00:01:41.758 LINK bdevperf 00:01:41.758 LINK memory_ut 00:01:41.758 LINK cuse 00:01:42.695 LINK iscsi_fuzz 00:01:44.600 LINK esnap 00:01:44.600 00:01:44.600 real 0m47.338s 00:01:44.600 user 6m33.598s 00:01:44.600 sys 4m19.126s 00:01:44.600 23:43:45 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:44.600 23:43:45 make -- common/autotest_common.sh@10 -- $ set +x 00:01:44.600 ************************************ 00:01:44.600 END TEST make 00:01:44.600 ************************************ 00:01:44.600 23:43:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:44.600 23:43:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:44.600 23:43:45 -- pm/common@40 -- $ local 
monitor pid pids signal=TERM 00:01:44.600 23:43:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.600 23:43:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:44.600 23:43:45 -- pm/common@44 -- $ pid=3283201 00:01:44.600 23:43:45 -- pm/common@50 -- $ kill -TERM 3283201 00:01:44.600 23:43:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.600 23:43:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:44.600 23:43:45 -- pm/common@44 -- $ pid=3283203 00:01:44.600 23:43:45 -- pm/common@50 -- $ kill -TERM 3283203 00:01:44.600 23:43:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.600 23:43:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:44.600 23:43:45 -- pm/common@44 -- $ pid=3283205 00:01:44.600 23:43:45 -- pm/common@50 -- $ kill -TERM 3283205 00:01:44.600 23:43:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.600 23:43:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:44.600 23:43:45 -- pm/common@44 -- $ pid=3283235 00:01:44.600 23:43:45 -- pm/common@50 -- $ sudo -E kill -TERM 3283235 00:01:44.860 23:43:45 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:44.860 23:43:45 -- nvmf/common.sh@7 -- # uname -s 00:01:44.860 23:43:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:44.860 23:43:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:44.860 23:43:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:44.860 23:43:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:44.860 23:43:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:44.860 23:43:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:44.860 23:43:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:44.860 23:43:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:44.860 23:43:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:44.860 23:43:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:44.860 23:43:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:01:44.860 23:43:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:01:44.860 23:43:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:44.860 23:43:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:44.860 23:43:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:44.860 23:43:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:44.860 23:43:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:44.860 23:43:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:44.860 23:43:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.860 23:43:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.860 23:43:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.860 
23:43:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.860 23:43:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.860 23:43:45 -- paths/export.sh@5 -- # export PATH 00:01:44.860 23:43:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.860 23:43:45 -- nvmf/common.sh@47 -- # : 0 00:01:44.860 23:43:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:44.860 23:43:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:44.860 23:43:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:44.860 23:43:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:44.860 23:43:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:44.860 23:43:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:44.860 23:43:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:44.860 23:43:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:44.860 23:43:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:44.860 23:43:45 -- spdk/autotest.sh@32 -- # uname -s 00:01:44.860 23:43:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:44.860 23:43:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:44.860 23:43:45 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.860 23:43:45 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:44.860 23:43:45 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.860 23:43:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:44.860 23:43:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:44.860 23:43:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:44.860 23:43:45 -- spdk/autotest.sh@48 -- # udevadm_pid=3343191 00:01:44.860 23:43:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:44.860 23:43:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:44.860 23:43:45 -- pm/common@17 -- # local monitor 00:01:44.860 23:43:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.860 23:43:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.860 23:43:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.860 23:43:45 -- pm/common@21 -- # date +%s 00:01:44.860 23:43:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.860 23:43:45 -- pm/common@21 -- # date +%s 00:01:44.860 23:43:45 -- pm/common@25 -- # sleep 1 00:01:44.860 23:43:45 -- pm/common@21 -- # date +%s 00:01:44.860 23:43:45 -- pm/common@21 -- # date +%s 00:01:44.860 23:43:45 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715723025 00:01:44.860 23:43:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715723025 00:01:44.860 23:43:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715723025 00:01:44.860 23:43:45 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715723025 00:01:44.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715723025_collect-vmstat.pm.log 00:01:44.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715723025_collect-cpu-load.pm.log 00:01:44.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715723025_collect-cpu-temp.pm.log 00:01:44.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715723025_collect-bmc-pm.bmc.pm.log 00:01:45.799 23:43:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:45.800 23:43:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:45.800 23:43:46 -- common/autotest_common.sh@720 -- # xtrace_disable 00:01:45.800 23:43:46 -- common/autotest_common.sh@10 -- # set +x 00:01:45.800 23:43:46 -- spdk/autotest.sh@59 -- # create_test_list 00:01:45.800 23:43:46 -- common/autotest_common.sh@744 -- # xtrace_disable 00:01:45.800 23:43:46 -- common/autotest_common.sh@10 -- # set +x 00:01:45.800 23:43:46 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:45.800 23:43:46 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.800 23:43:46 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.800 23:43:46 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:45.800 23:43:46 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.800 23:43:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:45.800 23:43:46 -- common/autotest_common.sh@1451 -- # uname 00:01:45.800 23:43:46 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:01:45.800 23:43:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:45.800 23:43:46 -- common/autotest_common.sh@1471 -- # uname 00:01:45.800 23:43:46 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:01:45.800 23:43:46 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:45.800 23:43:46 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:45.800 23:43:46 -- spdk/autotest.sh@72 -- # hash lcov 00:01:45.800 23:43:46 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:45.800 23:43:46 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:45.800 --rc lcov_branch_coverage=1 00:01:45.800 --rc lcov_function_coverage=1 00:01:45.800 --rc genhtml_branch_coverage=1 00:01:45.800 --rc genhtml_function_coverage=1 
00:01:45.800 --rc genhtml_legend=1 00:01:45.800 --rc geninfo_all_blocks=1 00:01:45.800 ' 00:01:45.800 23:43:46 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:45.800 --rc lcov_branch_coverage=1 00:01:45.800 --rc lcov_function_coverage=1 00:01:45.800 --rc genhtml_branch_coverage=1 00:01:45.800 --rc genhtml_function_coverage=1 00:01:45.800 --rc genhtml_legend=1 00:01:45.800 --rc geninfo_all_blocks=1 00:01:45.800 ' 00:01:45.800 23:43:46 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:45.800 --rc lcov_branch_coverage=1 00:01:45.800 --rc lcov_function_coverage=1 00:01:45.800 --rc genhtml_branch_coverage=1 00:01:45.800 --rc genhtml_function_coverage=1 00:01:45.800 --rc genhtml_legend=1 00:01:45.800 --rc geninfo_all_blocks=1 00:01:45.800 --no-external' 00:01:45.800 23:43:46 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:45.800 --rc lcov_branch_coverage=1 00:01:45.800 --rc lcov_function_coverage=1 00:01:45.800 --rc genhtml_branch_coverage=1 00:01:45.800 --rc genhtml_function_coverage=1 00:01:45.800 --rc genhtml_legend=1 00:01:45.800 --rc geninfo_all_blocks=1 00:01:45.800 --no-external' 00:01:45.800 23:43:46 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:46.059 lcov: LCOV version 1.14 00:01:46.059 23:43:46 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:56.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:56.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:01:56.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:56.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:01:56.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:56.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:01:56.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:56.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:08.556 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:08.556 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:08.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:08.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:08.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:08.557 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:08.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:08.557 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:08.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:08.557 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:08.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:08.557 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:08.817 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:08.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:08.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:08.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:08.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:09.078 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions 
found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:09.078 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:10.456 23:44:10 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:10.456 23:44:10 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:10.456 23:44:10 -- common/autotest_common.sh@10 -- # set +x 00:02:10.456 23:44:10 -- spdk/autotest.sh@91 -- # rm -f 00:02:10.456 23:44:10 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:13.745 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:13.745 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:13.745 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:13.745 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:13.745 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:13.745 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:13.745 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:13.745 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:13.745 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:13.746 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:13.746 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:13.746 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:13.746 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:13.746 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:13.746 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:13.746 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:13.746 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:13.746 23:44:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:13.746 23:44:14 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:13.746 23:44:14 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:13.746 23:44:14 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:13.746 23:44:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:13.746 23:44:14 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:13.746 23:44:14 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:13.746 23:44:14 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:13.746 23:44:14 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:13.746 23:44:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:13.746 23:44:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:13.746 23:44:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:13.746 23:44:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:13.746 23:44:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:13.746 23:44:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:13.746 No valid GPT data, bailing 00:02:13.746 
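Right before the target disk is prepared, autotest filters out zoned namespaces: for every /sys/block/nvme* entry it checks whether queue/zoned exists and reads anything other than "none". On this node nvme0n1 reports "none", so zoned_devs stays empty and the (( 0 > 0 )) guard falls through to the normal wipe path. A simplified stand-alone version of that check (the function body below is a sketch, not the harness's exact helper; keying the array by device name and storing the sysfs path is an assumption):

  # Collect zoned NVMe namespaces so destructive steps can skip them (sketch).
  get_zoned_devs() {
      local -gA zoned_devs=()
      local nvme
      for nvme in /sys/block/nvme*; do
          [[ -e $nvme/queue/zoned ]] || continue             # no attribute: treat as a regular device
          [[ $(<"$nvme/queue/zoned") != none ]] || continue  # "none" means not zoned
          zoned_devs[${nvme##*/}]=$nvme                      # assumption: name -> sysfs path
      done
  }

  get_zoned_devs
  (( ${#zoned_devs[@]} > 0 )) && echo "zoned namespaces present: ${!zoned_devs[*]}"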
23:44:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:13.746 23:44:14 -- scripts/common.sh@391 -- # pt= 00:02:13.746 23:44:14 -- scripts/common.sh@392 -- # return 1 00:02:13.746 23:44:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:13.746 1+0 records in 00:02:13.746 1+0 records out 00:02:13.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00559429 s, 187 MB/s 00:02:13.746 23:44:14 -- spdk/autotest.sh@118 -- # sync 00:02:13.746 23:44:14 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:13.746 23:44:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:13.746 23:44:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:20.320 23:44:20 -- spdk/autotest.sh@124 -- # uname -s 00:02:20.320 23:44:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:20.320 23:44:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:20.320 23:44:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:20.320 23:44:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:20.320 23:44:20 -- common/autotest_common.sh@10 -- # set +x 00:02:20.579 ************************************ 00:02:20.579 START TEST setup.sh 00:02:20.579 ************************************ 00:02:20.579 23:44:20 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:20.579 * Looking for test storage... 00:02:20.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:20.579 23:44:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:20.579 23:44:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:20.579 23:44:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:20.579 23:44:21 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:20.579 23:44:21 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:20.579 23:44:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:20.579 ************************************ 00:02:20.579 START TEST acl 00:02:20.579 ************************************ 00:02:20.579 23:44:21 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:20.837 * Looking for test storage... 
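The wipe just above is gated on the namespace really being free: scripts/spdk-gpt.py reports "No valid GPT data, bailing", blkid finds no PTTYPE either, so block_in_use returns 1 and the first MiB of /dev/nvme0n1 is zeroed and synced before the setup tests start. A reduced sketch of that guard (the spdk-gpt.py step is omitted here; treating an empty blkid PTTYPE as "free to wipe" mirrors the trace, and the device path is the one from this run):

  # Zero the head of a namespace only when no partition table is detected on it (sketch).
  dev=/dev/nvme0n1
  pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty output: blkid sees no partition table
  if [[ -z $pt ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1     # clear stale GPT/MBR metadata at the disk head
      sync
  fi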
00:02:20.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:20.837 23:44:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:20.837 23:44:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:20.837 23:44:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:20.837 23:44:21 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:20.837 23:44:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:20.837 23:44:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:20.837 23:44:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:20.837 23:44:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:20.837 23:44:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:20.837 23:44:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:20.837 23:44:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:20.837 23:44:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:20.837 23:44:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:20.837 23:44:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:20.837 23:44:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:20.837 23:44:21 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:24.131 23:44:24 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:24.131 23:44:24 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:24.131 23:44:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.131 23:44:24 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:24.131 23:44:24 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:24.131 23:44:24 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:26.671 Hugepages 00:02:26.671 node hugesize free / total 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 00:02:26.671 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.671 23:44:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.671 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.672 23:44:27 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:26.672 23:44:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:26.672 23:44:27 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:26.672 23:44:27 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:26.672 23:44:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:26.672 ************************************ 00:02:26.672 START TEST denied 00:02:26.672 ************************************ 00:02:26.672 23:44:27 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:26.672 23:44:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:26.672 23:44:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:26.672 23:44:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:26.672 23:44:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:26.672 23:44:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:30.900 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:30.900 23:44:30 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:30.900 23:44:30 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:30.900 23:44:30 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:30.900 23:44:30 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:30.900 23:44:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:30.900 23:44:30 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:30.900 23:44:30 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:30.900 23:44:30 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:30.900 23:44:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:30.900 23:44:30 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:35.098 00:02:35.098 real 0m8.077s 00:02:35.098 user 0m2.579s 00:02:35.098 sys 0m4.874s 00:02:35.098 23:44:35 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:35.098 23:44:35 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:35.098 ************************************ 00:02:35.098 END TEST denied 00:02:35.098 ************************************ 00:02:35.098 23:44:35 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:35.098 23:44:35 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:35.098 23:44:35 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:35.098 23:44:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:35.098 ************************************ 00:02:35.098 START TEST allowed 00:02:35.098 ************************************ 00:02:35.098 23:44:35 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:35.098 23:44:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:35.098 23:44:35 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:35.098 23:44:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:35.098 23:44:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.098 23:44:35 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:40.377 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:40.377 23:44:40 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:40.377 23:44:40 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:40.377 23:44:40 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:40.377 23:44:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:40.377 23:44:40 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.669 00:02:43.669 real 0m8.586s 00:02:43.669 user 0m2.384s 00:02:43.669 sys 0m4.706s 00:02:43.669 23:44:44 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:43.669 23:44:44 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:43.669 ************************************ 00:02:43.669 END TEST allowed 00:02:43.669 ************************************ 00:02:43.669 00:02:43.669 real 0m22.953s 00:02:43.669 user 0m6.963s 00:02:43.669 sys 0m13.949s 00:02:43.669 23:44:44 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:43.669 23:44:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:43.669 ************************************ 00:02:43.669 END TEST acl 00:02:43.669 ************************************ 00:02:43.669 23:44:44 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:43.669 23:44:44 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:43.669 23:44:44 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:02:43.669 23:44:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:43.669 ************************************ 00:02:43.669 START TEST hugepages 00:02:43.669 ************************************ 00:02:43.669 23:44:44 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:43.669 * Looking for test storage... 00:02:43.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:43.669 23:44:44 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:43.670 23:44:44 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 37954076 kB' 'MemAvailable: 42626068 kB' 'Buffers: 2696 kB' 'Cached: 14301048 kB' 'SwapCached: 0 kB' 'Active: 10336700 kB' 'Inactive: 4455220 kB' 'Active(anon): 9770248 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491636 kB' 'Mapped: 228524 kB' 'Shmem: 9282072 kB' 'KReclaimable: 294920 kB' 'Slab: 930156 kB' 'SReclaimable: 294920 kB' 'SUnreclaim: 635236 kB' 'KernelStack: 22032 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439056 kB' 'Committed_AS: 11128672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216424 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:43.670 23:44:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
[... setup/common.sh@31-32: the get_meminfo scan steps through /proc/meminfo field by field (MemFree, MemAvailable, Buffers, Cached, SwapCached, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp), skipping each non-matching field with continue until it reaches Hugepagesize ...]
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:43.671 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:43.672 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:43.672 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:02:43.672 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:43.672 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:43.672 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:43.672 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:43.672 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:43.672 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:43.930 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:43.930 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:43.930 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:43.930 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:43.930 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:43.930 23:44:44
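The trace above is setup/common.sh's get_meminfo helper resolving Hugepagesize: it reads /proc/meminfo one "key: value" line at a time, skips every non-matching key with continue, and echoes the value (2048, i.e. 2 MB pages) once the requested field is reached; hugepages.sh then records that as default_hugepages and clear_hp zeroes any pre-existing per-node hugepage reservations. A minimal stand-alone sketch of the same lookup pattern (an illustration with a made-up function name, not the exact SPDK helper) could look like this:

```bash
#!/usr/bin/env bash
# Minimal sketch of a get_meminfo-style lookup (illustration only, not the exact
# helper from setup/common.sh). Prints the value column of one /proc/meminfo field.
meminfo_get() {
	local get=$1
	local var val _

	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # skip every other field, as in the trace above
		echo "$val"
		return 0
	done </proc/meminfo

	return 1                               # field not present on this kernel
}

meminfo_get Hugepagesize     # -> 2048 (kB), the value echoed in the trace above
meminfo_get HugePages_Total  # -> 1024 in the meminfo snapshots printed later in this test
```

The real helper can also be pointed at a per-node /sys/devices/system/node/node&lt;N&gt;/meminfo view, which is why later calls in this log test for that file and strip the "Node &lt;N&gt; " prefix before parsing.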
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:43.930 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:43.931 23:44:44 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:43.931 23:44:44 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:43.931 23:44:44 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:43.931 23:44:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:43.931 ************************************ 00:02:43.931 START TEST default_setup 00:02:43.931 ************************************ 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.931 23:44:44 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:47.220 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 
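The get_test_nr_hugepages 2097152 0 call traced above asks for 2097152 kB of hugepage memory pinned to node 0. With the 2048 kB page size read earlier, that is 2097152 / 2048 = 1024 pages, which is why the trace sets nr_hugepages=1024 and nodes_test[_no_nodes]=1024. A simplified sketch of that arithmetic follows (variable names loosely follow the trace; the real helper can also split pages across several nodes). The ioatdma/NVMe -> vfio-pci lines just before and after this point are scripts/setup.sh rebinding devices before the hugepage check continues.

```bash
# Simplified sketch of the get_test_nr_hugepages arithmetic traced above
# (illustration only; the real function also handles multi-node splits).
size=2097152              # requested hugepage memory in kB
default_hugepages=2048    # Hugepagesize from /proc/meminfo, in kB
node_ids=(0)              # the single NUMA node requested

(( size >= default_hugepages )) || exit 1        # request must cover at least one page

nr_hugepages=$(( size / default_hugepages ))     # 2097152 / 2048 = 1024
declare -A nodes_test
for node in "${node_ids[@]}"; do
	nodes_test[$node]=$nr_hugepages
done

printf 'node%s: %s hugepages of %s kB\n' "$node" "${nodes_test[$node]}" "$default_hugepages"
# -> node0: 1024 hugepages of 2048 kB
```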
00:02:47.220 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:47.220 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:48.602 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.602 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40094204 kB' 'MemAvailable: 44766192 kB' 'Buffers: 2696 kB' 'Cached: 14301184 kB' 'SwapCached: 0 kB' 'Active: 10354092 kB' 'Inactive: 4455220 kB' 'Active(anon): 9787640 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508908 kB' 'Mapped: 228688 kB' 'Shmem: 9282208 kB' 'KReclaimable: 294912 kB' 'Slab: 928580 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633668 kB' 'KernelStack: 22080 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11145168 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB'
[... setup/common.sh@31-32: the same field-by-field scan over the /proc/meminfo snapshot above, skipping every entry with continue until AnonHugePages is matched ...]
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
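With anon=0 recorded, verify_nr_hugepages issues its next probe, get_meminfo HugePages_Surp. As the trace that follows shows, the helper first decides whether to read the global /proc/meminfo or a per-node file: no node is passed here, so the [[ -e ... ]] test is evaluated against /sys/devices/system/node/node/meminfo, finds nothing, and the helper falls back to the global file; a per-node file would have its "Node &lt;N&gt; " line prefix stripped before parsing. A small sketch of that selection step, assuming a helper shaped like the one traced here (read_meminfo_lines is a made-up name):

```bash
# Sketch of the node-aware meminfo selection used by the traced helper
# (illustration only). extglob is needed for the +([0-9]) prefix pattern.
shopt -s extglob

read_meminfo_lines() {
	local node=${1:-} mem_f=/proc/meminfo
	local -a mem

	# Prefer the per-node view when a node id was given and the file exists.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	# Per-node files prefix each line with "Node <N> "; drop it so both views parse alike.
	printf '%s\n' "${mem[@]#Node +([0-9]) }"
}

read_meminfo_lines     # global view, as in this trace (node is empty)
read_meminfo_lines 0   # per-node view for node0
```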
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:48.604 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40095824 kB' 'MemAvailable: 44767812 kB' 'Buffers: 2696 kB' 'Cached: 14301188 kB' 'SwapCached: 0 kB' 'Active: 10354920 kB' 'Inactive: 4455220 kB' 'Active(anon): 9788468 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509996 kB' 'Mapped: 228644 kB' 'Shmem: 9282212 kB' 'KReclaimable: 294912 kB' 'Slab: 928584 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633672 kB' 'KernelStack: 22256 kB' 'PageTables: 9596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11144940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB'
[... setup/common.sh@31-32: field-by-field scan of this snapshot, skipping every entry with continue until HugePages_Surp is matched ...]
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB'
'MemFree: 40097804 kB' 'MemAvailable: 44769792 kB' 'Buffers: 2696 kB' 'Cached: 14301204 kB' 'SwapCached: 0 kB' 'Active: 10355664 kB' 'Inactive: 4455220 kB' 'Active(anon): 9789212 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510564 kB' 'Mapped: 228644 kB' 'Shmem: 9282228 kB' 'KReclaimable: 294912 kB' 'Slab: 928556 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633644 kB' 'KernelStack: 22576 kB' 'PageTables: 10108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11145208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216568 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.869 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.870 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
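The xtrace above is setup/common.sh walking a /proc/meminfo dump one key at a time: each 'Key: value' line is split on ': ', every key that is not the one requested hits 'continue', and the value is echoed once the requested key matches. A minimal sketch of that pattern, simplified and reconstructed from the trace (the name get_meminfo and the paths come from the trace itself; this is not the verbatim SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob
    # get_meminfo KEY [NODE] -- print the value of KEY, system-wide or for one
    # NUMA node, scanning the meminfo file the way the trace above does.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Per-node queries read that node's own meminfo when it exists; node
        # files prefix every line with "Node <N> ", which is stripped below.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated 'continue' above
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as 'get_meminfo HugePages_Rsvd' this prints 0 against the dump shown above; the per-node variant ('get_meminfo HugePages_Surp 0') appears further down in the trace.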
00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:48.871 nr_hugepages=1024 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.871 resv_hugepages=0 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.871 surplus_hugepages=0 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.871 anon_hugepages=0 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40096496 kB' 'MemAvailable: 44768484 kB' 'Buffers: 2696 kB' 'Cached: 14301224 kB' 'SwapCached: 0 kB' 'Active: 10355272 kB' 'Inactive: 4455220 kB' 'Active(anon): 
9788820 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510008 kB' 'Mapped: 228636 kB' 'Shmem: 9282248 kB' 'KReclaimable: 294912 kB' 'Slab: 928556 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633644 kB' 'KernelStack: 22528 kB' 'PageTables: 9916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11143712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.871 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
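By this point hugepages.sh has pulled HugePages_Surp and HugePages_Rsvd out of the same dump (both 0), echoed the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary, and is scanning for HugePages_Total to confirm the pool adds up. The consistency check visible at hugepages.sh@107-110 roughly amounts to the following (illustrative, reusing the get_meminfo sketch above; 1024 is the page count this run requested):

    nr_hugepages=1024                       # what the test asked for
    surp=$(get_meminfo HugePages_Surp)      # 0 in the dump above
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in the dump above
    total=$(get_meminfo HugePages_Total)    # 1024 in the dump above
    # The default_setup pool is consistent when the kernel-reported total
    # covers the requested pages plus any surplus and reserved pages.
    (( total == nr_hugepages + surp + resv )) || exit 1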
00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
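Further down, the trace switches from the system-wide /proc/meminfo to per-node counters: get_nodes globs /sys/devices/system/node/node<N> (no_nodes=2 on this box) and get_meminfo is re-run with a node argument against /sys/devices/system/node/node0/meminfo. A rough sketch of such a per-node pass, reusing the get_meminfo sketch above (the names get_nodes and nodes_sys come from the trace; which counters to read per node is an illustrative choice here, not the exact hugepages.sh logic):

    shopt -s extglob nullglob
    nodes_sys=()
    # Enumerate NUMA nodes the same way the trace does and read each node's
    # hugepage counters from its own meminfo file.
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}
        nodes_sys[$id]=$(get_meminfo HugePages_Total "$id")
    done
    echo "no_nodes=${#nodes_sys[@]}"
    for id in "${!nodes_sys[@]}"; do
        echo "node$id: HugePages_Total=${nodes_sys[$id]}" \
             "HugePages_Surp=$(get_meminfo HugePages_Surp "$id")"
    done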
00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.872 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 
-- # return 0 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18344952 kB' 'MemUsed: 14294188 kB' 'SwapCached: 0 kB' 'Active: 6274580 kB' 'Inactive: 4307464 kB' 'Active(anon): 5985516 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4307464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10248996 kB' 'Mapped: 101852 kB' 'AnonPages: 336360 kB' 'Shmem: 5652468 kB' 'KernelStack: 13784 kB' 'PageTables: 6280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192348 kB' 'Slab: 515544 kB' 'SReclaimable: 192348 kB' 'SUnreclaim: 323196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.873 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.874 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.875 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.875 23:44:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.875 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.875 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.875 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.875 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.875 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:48.875 node0=1024 expecting 1024 00:02:48.875 23:44:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:48.875 00:02:48.875 real 0m4.998s 00:02:48.875 user 0m1.362s 00:02:48.875 sys 0m2.235s 00:02:48.875 23:44:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:48.875 23:44:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:48.875 ************************************ 00:02:48.875 END TEST default_setup 00:02:48.875 ************************************ 00:02:48.875 23:44:49 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:48.875 23:44:49 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:48.875 23:44:49 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:48.875 23:44:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:48.875 ************************************ 00:02:48.875 START TEST per_node_1G_alloc 00:02:48.875 ************************************ 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.875 23:44:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:52.167 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:52.167 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:52.434 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:52.434 23:44:52 
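What the per_node_1G_alloc prologue above amounts to: the test asks for 1048576 kB (1 GiB) of default-size hugepages on each of nodes 0 and 1; with Hugepagesize at 2048 kB that is 512 pages per node, so scripts/setup.sh is invoked with NRHUGE=512 and HUGENODE=0,1, for 1024 pages in total. Below is a minimal Bash sketch of that sizing, assuming the 2048 kB default page size shown in the meminfo dumps; compute_per_node_hugepages is an illustrative name, not one of the setup/hugepages.sh helpers.

    default_hugepages=2048   # kB; matches "Hugepagesize: 2048 kB" in the dumps above

    compute_per_node_hugepages() {
        local size_kb=$1; shift          # requested size per node, in kB
        local -a node_ids=("$@")         # NUMA nodes to populate
        local nr_hugepages=$(( size_kb / default_hugepages ))
        local node
        for node in "${node_ids[@]}"; do
            echo "node$node -> $nr_hugepages hugepages"
        done
        # Hand the result to the SPDK setup script the same way the trace does.
        local hugenode; hugenode=$(IFS=,; echo "${node_ids[*]}")
        NRHUGE=$nr_hugepages HUGENODE=$hugenode ./scripts/setup.sh
    }

    # 1 GiB worth of 2 MiB pages on each of nodes 0 and 1 -> 512 per node, 1024 total.
    compute_per_node_hugepages 1048576 0 1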
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:52.434 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:52.434 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.434 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:52.434 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:52.434 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:52.434 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:52.434 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:52.434 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40134520 kB' 'MemAvailable: 44806508 kB' 'Buffers: 2696 kB' 'Cached: 14301332 kB' 'SwapCached: 0 kB' 'Active: 10353156 kB' 'Inactive: 4455220 kB' 'Active(anon): 9786704 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507700 kB' 'Mapped: 227556 kB' 'Shmem: 9282356 kB' 'KReclaimable: 294912 kB' 'Slab: 929008 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 634096 kB' 'KernelStack: 22032 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11135468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216616 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.435 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
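The verify_nr_hugepages trace repeats the same lookup pattern for each key (AnonHugePages gave anon=0 just above; HugePages_Surp is read next): snapshot the meminfo file with mapfile, strip any "Node N " prefix so /proc/meminfo and the per-node files parse identically, then walk the "key: value" pairs with IFS=': ' read -r var val _ until the requested key is found and echo its value. A condensed sketch of that pattern follows; get_meminfo_sketch is an illustrative name, not the literal setup/common.sh implementation.

    shopt -s extglob    # needed for the "Node N " prefix strip below

    get_meminfo_sketch() {
        local get=$1 node=${2:-}             # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it so both
        # file formats parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # In this run: get_meminfo_sketch HugePages_Surp -> 0, which is what surp=0 records.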
00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40135800 kB' 'MemAvailable: 44807788 kB' 'Buffers: 2696 kB' 'Cached: 14301336 kB' 'SwapCached: 0 kB' 'Active: 10352664 kB' 'Inactive: 4455220 kB' 'Active(anon): 9786212 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507192 kB' 'Mapped: 227552 kB' 'Shmem: 9282360 kB' 'KReclaimable: 294912 kB' 'Slab: 929044 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 634132 kB' 'KernelStack: 22064 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11135488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.436 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.437 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc 
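With anon=0 and surp=0 recorded, the trace moves on to HugePages_Rsvd and then to the per-node comparison that closes every run of this helper (the earlier default_setup instance ended with "node0=1024 expecting 1024"). A rough sketch of that final check, reusing the illustrative get_meminfo_sketch helper from above and assuming the 512-pages-per-node expectation computed earlier; it is not the actual setup/hugepages.sh verification code.

    verify_hugepages_sketch() {
        local -A expected=([0]=512 [1]=512)   # from the per-node sizing above
        local surp node actual
        surp=$(get_meminfo_sketch HugePages_Surp)          # 0 in this run
        for node in "${!expected[@]}"; do
            actual=$(get_meminfo_sketch HugePages_Total "$node")
            (( actual += surp ))                           # surplus pages count toward the node
            echo "node$node=$actual expecting ${expected[$node]}"
            (( actual == ${expected[$node]} )) || return 1
        done
    }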
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40135456 kB' 'MemAvailable: 44807444 kB' 'Buffers: 2696 kB' 'Cached: 14301372 kB' 'SwapCached: 0 kB' 'Active: 10352344 kB' 'Inactive: 4455220 kB' 'Active(anon): 9785892 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506816 kB' 'Mapped: 227552 kB' 'Shmem: 9282396 kB' 'KReclaimable: 294912 kB' 'Slab: 929044 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 634132 kB' 'KernelStack: 22048 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11135508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.438 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.439 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:52.440 nr_hugepages=1024 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:52.440 resv_hugepages=0 00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:52.440 surplus_hugepages=0 00:02:52.440 23:44:52 
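The surp=0 and resv=0 values above come from the same lookup each time: read a meminfo file, walk it key by key, and echo the value of the first key that matches the request. A minimal stand-alone sketch of that pattern, reconstructed from the setup/common.sh trace above (the function name get_meminfo_sketch and the sed-based prefix strip are illustrative assumptions, not the SPDK code itself):

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local key val _
    local mem_f=/proc/meminfo
    # With a node number, read that node's sysfs meminfo instead; its lines carry a
    # "Node <N> " prefix, which the sed below strips (a no-op for /proc/meminfo).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r key val _; do
        # Same walk as the trace: skip every key until the requested one matches.
        if [[ $key == "$get" ]]; then
            echo "$val"    # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

Against the snapshot printed above, get_meminfo_sketch HugePages_Rsvd would print 0 and get_meminfo_sketch HugePages_Total would print 1024; with a second argument it reads /sys/devices/system/node/node<N>/meminfo instead.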
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:52.440 anon_hugepages=0
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.440 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40134700 kB' 'MemAvailable: 44806688 kB' 'Buffers: 2696 kB' 'Cached: 14301376 kB' 'SwapCached: 0 kB' 'Active: 10352708 kB' 'Inactive: 4455220 kB' 'Active(anon): 9786256 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507196 kB' 'Mapped: 227552 kB' 'Shmem: 9282400 kB' 'KReclaimable: 294912 kB' 'Slab: 929044 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 634132 kB' 'KernelStack: 22064 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11135532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB'
[per-key scan elided: setup/common.sh@31-32 walks MemTotal through Unaccepted, comparing each key against HugePages_Total and skipping it with continue]
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
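The per-node pass that follows repeats the same lookup against each node's own meminfo under /sys/devices/system/node, where every line carries a "Node <N> " prefix and the hugepage counters are plain page counts (512 per node across the 2 nodes in this run). A sketch of that per-node verification, under the assumption of a layout like this run's; check_node_hugepages is a hypothetical helper, not a function from setup/hugepages.sh:

check_node_hugepages() {
    # Verify that every NUMA node reports the expected number of hugepages.
    local want=$1 node total
    for node in /sys/devices/system/node/node[0-9]*; do
        # Per-node counters look like "Node 0 HugePages_Total:   512" (no kB unit).
        total=$(sed -n 's/^Node [0-9]* HugePages_Total: *//p' "$node/meminfo")
        printf '%s HugePages_Total=%s (expected %s)\n' "${node##*/}" "$total" "$want"
        [[ $total -eq $want ]] || return 1
    done
}

For the allocation traced here, check_node_hugepages 512 would report node0 and node1 each holding 512 pages, matching the nodes_sys[...]=512 assignments above.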
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.442 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19382276 kB' 'MemUsed: 13256864 kB' 'SwapCached: 0 kB' 'Active: 6274556 kB' 'Inactive: 4307464 kB' 'Active(anon): 5985492 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4307464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10249040 kB' 'Mapped: 100764 kB' 'AnonPages: 336168 kB' 'Shmem: 5652512 kB' 'KernelStack: 13368 kB' 'PageTables: 5456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192348 kB' 'Slab: 516268 kB' 'SReclaimable: 192348 kB' 'SUnreclaim: 323920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[node0 per-key scan in progress: setup/common.sh@31-32 walks MemTotal onward, comparing each key against HugePages_Surp and skipping non-matches with continue]
00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 20751276 kB' 'MemUsed: 6904792 kB' 'SwapCached: 0 kB' 'Active: 4078084 kB' 'Inactive: 147756 kB' 'Active(anon): 3800696 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 147756 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4055072 kB' 'Mapped: 126788 kB' 'AnonPages: 170904 kB' 'Shmem: 3629928 kB' 'KernelStack: 8680 kB' 'PageTables: 3316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102564 kB' 'Slab: 412776 kB' 'SReclaimable: 102564 kB' 'SUnreclaim: 310212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.444 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 
23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:52.445 node0=512 expecting 512 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:52.445 node1=512 expecting 512 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:52.445 00:02:52.445 real 0m3.596s 00:02:52.445 user 0m1.349s 00:02:52.445 sys 0m2.282s 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:52.445 23:44:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:52.445 ************************************ 00:02:52.445 END TEST per_node_1G_alloc 00:02:52.445 ************************************ 00:02:52.706 23:44:53 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:52.706 23:44:53 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:52.706 23:44:53 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:52.706 23:44:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:52.706 ************************************ 00:02:52.706 START TEST even_2G_alloc 
00:02:52.706 ************************************ 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.706 23:44:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:56.072 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:00:04.1 (8086 
2021): Already using the vfio-pci driver 00:02:56.072 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.072 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40124624 kB' 'MemAvailable: 44796612 kB' 'Buffers: 2696 kB' 'Cached: 14301496 kB' 'SwapCached: 0 kB' 'Active: 10353780 kB' 'Inactive: 4455220 kB' 'Active(anon): 9787328 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508044 kB' 'Mapped: 227572 kB' 'Shmem: 9282520 kB' 'KReclaimable: 294912 kB' 'Slab: 928820 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633908 kB' 'KernelStack: 22096 kB' 'PageTables: 
8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11136148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.072 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.073 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.074 23:44:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40125140 kB' 'MemAvailable: 44797128 kB' 'Buffers: 2696 kB' 'Cached: 14301508 kB' 'SwapCached: 0 kB' 'Active: 10353480 kB' 'Inactive: 4455220 kB' 'Active(anon): 9787028 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507760 kB' 'Mapped: 227564 kB' 'Shmem: 9282532 kB' 'KReclaimable: 294912 kB' 'Slab: 928812 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633900 kB' 'KernelStack: 22064 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11136168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.074 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.075 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40124384 kB' 'MemAvailable: 44796372 kB' 'Buffers: 2696 kB' 'Cached: 14301508 kB' 'SwapCached: 0 kB' 'Active: 10353660 kB' 'Inactive: 4455220 kB' 'Active(anon): 9787208 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507960 kB' 'Mapped: 227564 kB' 'Shmem: 9282532 kB' 'KReclaimable: 294912 kB' 'Slab: 928812 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633900 kB' 'KernelStack: 22048 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11136060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216568 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.076 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.077 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 
23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:56.078 nr_hugepages=1024 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.078 resv_hugepages=0 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.078 surplus_hugepages=0 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.078 anon_hugepages=0 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.078 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40126532 
kB' 'MemAvailable: 44798520 kB' 'Buffers: 2696 kB' 'Cached: 14301540 kB' 'SwapCached: 0 kB' 'Active: 10357428 kB' 'Inactive: 4455220 kB' 'Active(anon): 9790976 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511784 kB' 'Mapped: 228068 kB' 'Shmem: 9282564 kB' 'KReclaimable: 294912 kB' 'Slab: 928812 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633900 kB' 'KernelStack: 22064 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11141488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.341 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
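Editor's note: at this point the scan has reached the HugePages_Total line; the next entries echo 1024 and evaluate (( 1024 == nr_hugepages + surp + resv )). The check is plain arithmetic: the pool the kernel reports must equal the pages the test requested plus any surplus and reserved pages, and with even 2G allocation across the two NUMA nodes each node is then expected to hold half of the pool. A hedged restatement with values taken from this run (variable names are mine):

    total=1024        # HugePages_Total from /proc/meminfo
    requested=1024    # nr_hugepages configured by the test
    surplus=0         # HugePages_Surp
    reserved=0        # HugePages_Rsvd
    (( total == requested + surplus + reserved )) || echo 'hugepage pool mismatch'
    # Even allocation over 2 nodes: 1024 / 2 = 512 pages per node, matching the
    # later "node0=512 expecting 512" / "node1=512 expecting 512" output.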
00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19373804 kB' 'MemUsed: 13265336 kB' 'SwapCached: 0 kB' 'Active: 6276512 kB' 'Inactive: 4307464 kB' 'Active(anon): 5987448 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4307464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10249068 kB' 'Mapped: 100916 kB' 'AnonPages: 338268 kB' 'Shmem: 5652540 kB' 'KernelStack: 13368 kB' 'PageTables: 5496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192348 kB' 'Slab: 515968 kB' 'SReclaimable: 192348 kB' 'SUnreclaim: 323620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.342 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 20764532 kB' 'MemUsed: 6891536 kB' 'SwapCached: 0 kB' 'Active: 4078644 kB' 'Inactive: 147756 kB' 'Active(anon): 3801256 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 147756 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4055208 kB' 'Mapped: 127316 kB' 'AnonPages: 171264 kB' 'Shmem: 3630064 kB' 'KernelStack: 8712 kB' 'PageTables: 3424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102564 kB' 'Slab: 412852 kB' 'SReclaimable: 102564 kB' 'SUnreclaim: 310288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
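Editor's note: the node-scoped reads in this part of the trace come from /sys/devices/system/node/node<N>/meminfo rather than /proc/meminfo; entries there carry a "Node N " prefix, which the script strips (the mem=("${mem[@]#Node +([0-9]) }") entries above) before running the same key scan. A simplified, hypothetical variant of that per-node lookup, reading past the prefix instead of stripping it with extglob:

    node_meminfo_value() {
        local node=$1 get=$2 var val _
        # Lines look like "Node 0 HugePages_Surp:      0"; skip the two prefix
        # fields, then compare the key (minus its trailing colon).
        while read -r _ _ var val _; do
            [[ ${var%:} == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }
    # e.g. node_meminfo_value 1 HugePages_Surp   -> 0 on this run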
00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:56.343 node0=512 expecting 512 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:56.343 node1=512 expecting 512 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:56.343 00:02:56.343 real 0m3.685s 00:02:56.343 user 0m1.423s 00:02:56.343 sys 0m2.319s 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:56.343 23:44:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:56.343 ************************************ 00:02:56.343 END TEST even_2G_alloc 00:02:56.343 ************************************ 00:02:56.343 23:44:56 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:56.343 23:44:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:56.343 23:44:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:56.343 23:44:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.343 ************************************ 00:02:56.343 START TEST odd_alloc 00:02:56.343 ************************************ 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:56.343 23:44:56 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:56.343 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:56.344 23:44:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:56.344 23:44:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.344 23:44:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:59.637 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:80:04.2 
(8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:59.637 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:59.902 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40107136 kB' 'MemAvailable: 44779124 kB' 'Buffers: 2696 kB' 'Cached: 14301656 kB' 'SwapCached: 0 kB' 'Active: 10360940 kB' 'Inactive: 4455220 kB' 'Active(anon): 9794488 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515416 kB' 'Mapped: 228336 kB' 'Shmem: 9282680 kB' 'KReclaimable: 294912 kB' 'Slab: 928704 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633792 kB' 'KernelStack: 22208 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11158152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3196276 kB' 
'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.903 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40100308 kB' 'MemAvailable: 44772296 kB' 'Buffers: 2696 kB' 'Cached: 14301676 kB' 'SwapCached: 0 kB' 'Active: 10364424 kB' 'Inactive: 4455220 kB' 'Active(anon): 9797972 kB' 'Inactive(anon): 0 
kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518792 kB' 'Mapped: 228080 kB' 'Shmem: 9282700 kB' 'KReclaimable: 294912 kB' 'Slab: 928712 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633800 kB' 'KernelStack: 22192 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11147756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216380 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 
23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.904 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 
23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.905 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40106860 kB' 'MemAvailable: 44778848 kB' 'Buffers: 2696 kB' 'Cached: 14301676 kB' 'SwapCached: 0 kB' 'Active: 10359044 kB' 'Inactive: 4455220 kB' 'Active(anon): 9792592 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513408 kB' 'Mapped: 227924 kB' 'Shmem: 9282700 kB' 'KReclaimable: 294912 kB' 'Slab: 928712 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633800 kB' 'KernelStack: 22208 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11141296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216392 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 
23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.906 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.907 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:59.908 nr_hugepages=1025 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:59.908 resv_hugepages=0 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:59.908 surplus_hugepages=0 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:59.908 anon_hugepages=0 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.908 23:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.908 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40107972 kB' 'MemAvailable: 44779960 kB' 'Buffers: 2696 kB' 'Cached: 14301712 kB' 'SwapCached: 0 kB' 'Active: 10364400 kB' 'Inactive: 4455220 kB' 'Active(anon): 9797948 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518664 kB' 'Mapped: 228492 kB' 'Shmem: 9282736 kB' 'KReclaimable: 294912 kB' 'Slab: 928712 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633800 kB' 'KernelStack: 22192 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11147800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216396 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.909 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.910 23:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.910 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19359808 kB' 'MemUsed: 13279332 kB' 'SwapCached: 0 kB' 'Active: 6280856 kB' 'Inactive: 4307464 kB' 'Active(anon): 5991792 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4307464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10249124 kB' 'Mapped: 100768 kB' 'AnonPages: 342584 kB' 'Shmem: 5652596 kB' 'KernelStack: 13496 kB' 'PageTables: 5776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192348 kB' 'Slab: 515788 kB' 'SReclaimable: 192348 kB' 'SUnreclaim: 323440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.911 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.912 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 20753448 kB' 'MemUsed: 6902620 kB' 'SwapCached: 0 kB' 'Active: 4079428 kB' 'Inactive: 147756 kB' 'Active(anon): 3802040 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 147756 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4055324 kB' 'Mapped: 127572 kB' 'AnonPages: 171992 kB' 'Shmem: 3630180 kB' 'KernelStack: 8712 kB' 'PageTables: 3416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102564 kB' 'Slab: 412924 kB' 'SReclaimable: 102564 kB' 'SUnreclaim: 310360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.175 23:45:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.175 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.176 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513'
00:03:00.177 node0=512 expecting 513
00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:00.177 node1=513 expecting 512
00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:00.177
00:03:00.177 real 0m3.685s
00:03:00.177 user 0m1.417s
00:03:00.177 sys 0m2.332s
00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:00.177 23:45:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:00.177 ************************************
00:03:00.177 END TEST odd_alloc
00:03:00.177 ************************************
00:03:00.177 23:45:00 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:00.177 23:45:00 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:00.177 23:45:00 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:00.177 23:45:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:00.177 ************************************
00:03:00.177 START TEST custom_alloc
00:03:00.177 ************************************
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.177 23:45:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:03.476 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:03.476 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39076940 kB' 'MemAvailable: 43748928 kB' 'Buffers: 2696 kB' 'Cached: 14301824 kB' 'SwapCached: 0 kB' 'Active: 10361900 kB' 'Inactive: 4455220 kB' 'Active(anon): 9795448 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515352 kB' 'Mapped: 228600 kB' 'Shmem: 9282848 kB' 'KReclaimable: 294912 kB' 'Slab: 928492 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633580 kB' 'KernelStack: 22160 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11146444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216620 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.476 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
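The trace above is setup/common.sh's get_meminfo walking every key in /proc/meminfo until it reaches AnonHugePages and echoing its value (0 here). A minimal stand-alone sketch of that lookup pattern, with illustrative names rather than the actual setup/common.sh code, is:

    # Hypothetical sketch of the /proc/meminfo lookup traced above (not the SPDK code).
    get_meminfo_sketch() {
        local get=$1                      # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"          # kB for size fields, a bare count for HugePages_*
                return 0
            fi
        done < /proc/meminfo
        echo 0                            # key absent: report 0, matching the "echo 0" in the trace
    }

    # The verification step reads three such values; all are expected to be 0 on this box.
    anon=$(get_meminfo_sketch AnonHugePages)
    surp=$(get_meminfo_sketch HugePages_Surp)
    rsvd=$(get_meminfo_sketch HugePages_Rsvd)
    echo "anon=$anon surp=$surp rsvd=$rsvd"

The helper in the trace additionally accepts a node argument and switches to /sys/devices/system/node/node<N>/meminfo when one is given (the node= check above is empty, so it falls back to /proc/meminfo); the sketch covers only the system-wide case.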
00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.477 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39077240 kB' 'MemAvailable: 43749228 kB' 'Buffers: 2696 kB' 'Cached: 14301828 kB' 'SwapCached: 0 kB' 'Active: 10361424 kB' 'Inactive: 4455220 kB' 'Active(anon): 9794972 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515352 kB' 'Mapped: 228512 kB' 'Shmem: 9282852 kB' 'KReclaimable: 294912 kB' 'Slab: 928480 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633568 kB' 'KernelStack: 22112 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11146464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216588 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.478 23:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.478 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
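For context, the nr_hugepages=1536 that this verify_nr_hugepages pass checks against was assembled a few lines earlier from the per-node custom allocation nodes_hp[0]=512 and nodes_hp[1]=1024. A small sketch of that bookkeeping, using illustrative variable handling rather than the exact setup/hugepages.sh code, is:

    # Hypothetical sketch of the custom_alloc totals traced above (not the SPDK code).
    declare -a nodes_hp HUGENODE
    nodes_hp[0]=512                  # 512 x 2 MiB pages on node 0  -> 1 GiB
    nodes_hp[1]=1024                 # 1024 x 2 MiB pages on node 1 -> 2 GiB

    nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( nr_hugepages += nodes_hp[node] ))
    done

    # Joined with commas this matches the HUGENODE value shown in the log, and the
    # total matches the 1536 pages (Hugetlb: 3145728 kB) reported by /proc/meminfo.
    HUGENODE_STR=$(IFS=,; echo "${HUGENODE[*]}")
    echo "HUGENODE=$HUGENODE_STR nr_hugepages=$nr_hugepages"
    # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536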
00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.479 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39078096 kB' 'MemAvailable: 43750084 kB' 'Buffers: 2696 kB' 'Cached: 14301856 kB' 'SwapCached: 0 kB' 'Active: 10361828 kB' 'Inactive: 4455220 kB' 'Active(anon): 9795376 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515728 kB' 'Mapped: 228512 kB' 'Shmem: 9282880 kB' 'KReclaimable: 294912 kB' 'Slab: 928480 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633568 kB' 'KernelStack: 22160 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11146988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
216604 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.480 23:45:03 setup.sh.hugepages.custom_alloc [xtrace condensed: the get_meminfo read loop in setup/common.sh stepped over every remaining non-matching /proc/meminfo key (Zswapped, Dirty, Writeback, AnonPages, ... HugePages_Total, HugePages_Free) with "continue" until it reached the requested key] 00:03:03.481 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.481 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.481 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:03.481 23:45:03 setup.sh.hugepages.custom_alloc
-- setup/hugepages.sh@100 -- # resv=0 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:03.482 nr_hugepages=1536 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:03.482 resv_hugepages=0 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:03.482 surplus_hugepages=0 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:03.482 anon_hugepages=0 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39078800 kB' 'MemAvailable: 43750788 kB' 'Buffers: 2696 kB' 'Cached: 14301896 kB' 'SwapCached: 0 kB' 'Active: 10361500 kB' 'Inactive: 4455220 kB' 'Active(anon): 9795048 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515368 kB' 'Mapped: 228512 kB' 'Shmem: 9282920 kB' 'KReclaimable: 294912 kB' 'Slab: 928480 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633568 kB' 'KernelStack: 22144 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11147008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216604 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
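For reference, the lookup pattern that the xtrace above keeps exercising can be sketched as follows. This is a minimal reconstruction assumed from the trace, not the verbatim setup/common.sh source (the traced script buffers the file with mapfile and walks the array, but the key/value scan is the same); the _sketch name is illustrative only:

    get_meminfo_sketch() {
        # Print the value of one /proc/meminfo key, e.g. HugePages_Total -> 1536 on this host.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Non-matching keys are skipped, exactly like the "continue" entries in the trace.
            [[ $var == "$get" ]] || continue
            echo "$val"      # the unit column ("kB"), when present, lands in the unused _ field
            return 0
        done < /proc/meminfo
        return 1
    }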
00:03:03.482 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [xtrace condensed: the same get_meminfo read loop stepped over the non-matching /proc/meminfo keys (MemFree through ShmemPmdMapped) while searching for HugePages_Total] 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.483 23:45:03
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:03.483 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19356396 kB' 'MemUsed: 13282744 kB' 'SwapCached: 0 kB' 'Active: 6277736 kB' 'Inactive: 4307464 kB' 'Active(anon): 5988672 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4307464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10249148 kB' 'Mapped: 100764 kB' 'AnonPages: 339132 kB' 'Shmem: 5652620 kB' 'KernelStack: 13400 kB' 'PageTables: 5536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192348 kB' 'Slab: 515504 kB' 'SReclaimable: 192348 kB' 'SUnreclaim: 323156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.484 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.484 23:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' [xtrace condensed: the node 0 pass of the same read loop stepped over the non-matching /sys/devices/system/node/node0/meminfo keys (Active through ShmemHugePages) while searching for HugePages_Surp] 00:03:03.485 23:45:03
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.485 23:45:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 19720136 kB' 'MemUsed: 7935932 kB' 'SwapCached: 0 kB' 'Active: 4080432 kB' 'Inactive: 147756 kB' 'Active(anon): 3803044 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 147756 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4055464 kB' 'Mapped: 127588 kB' 'AnonPages: 172868 kB' 'Shmem: 3630320 kB' 'KernelStack: 8728 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102564 kB' 'Slab: 412976 kB' 'SReclaimable: 102564 kB' 'SUnreclaim: 310412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
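The node-scoped lookups above read /sys/devices/system/node/node<N>/meminfo instead of /proc/meminfo; each line there carries a "Node <N> " prefix, which the traced script strips with mem=("${mem[@]#Node +([0-9]) }") before running the same key/value scan. A minimal sketch under the same assumption (illustrative name, not the verbatim setup/common.sh source):

    get_node_meminfo_sketch() {
        # Print the value of one key from a single NUMA node's meminfo file.
        local node=$1 get=$2 line var val _
        while read -r line; do
            line=${line#"Node $node "}             # drop the "Node <N> " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

In this run node 0 reports HugePages_Total: 512 and node 1 reports HugePages_Total: 1024, which together make up the 1536 pages the custom_alloc test configured.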
00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.485 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [xtrace condensed: the node 1 pass of the read loop stepped over the remaining non-matching /sys/devices/system/node/node1/meminfo keys (Inactive through ShmemHugePages) while searching for HugePages_Surp] 00:03:03.486 23:45:03
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:03.486 node0=512 expecting 512 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:03.486 node1=1024 expecting 1024 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:03.486 00:03:03.486 real 0m3.318s 00:03:03.486 user 0m1.151s 00:03:03.486 sys 0m2.112s 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:03.486 23:45:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:03.486 ************************************ 00:03:03.486 END TEST custom_alloc 00:03:03.486 ************************************ 00:03:03.486 23:45:03 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:03.486 23:45:03 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:03.486 23:45:03 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:03.486 23:45:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:03.486 ************************************ 00:03:03.486 START TEST no_shrink_alloc 00:03:03.486 ************************************ 00:03:03.486 23:45:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:03:03.486 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:03.486 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:03.486 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
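The custom_alloc test above exits with node0=512/node1=1024 as expected, and no_shrink_alloc then asks get_test_nr_hugepages for 2097152 pinned to node 0, which it books as nr_hugepages=1024 in nodes_test. Below is a small standalone sketch of that bookkeeping; the variable names follow the trace, but the script itself, and the assumption that the requested size and the default hugepage size are both in kB (2048 kB Hugepagesize, so 2097152 / 2048 = 1024), are illustrative rather than the SPDK helper verbatim.

#!/usr/bin/env bash
# Standalone sketch of the nodes_test bookkeeping the get_test_nr_hugepages
# trace above walks through. Assumptions: the requested size ($1) and the
# default hugepage size are both in kB, and the remaining arguments are an
# explicit node list that pins the whole allocation (the even-split path used
# when no nodes are given is omitted here).

default_hugepages=2048                 # Hugepagesize from the meminfo dumps below
size=${1:-2097152}                     # e.g. 2097152, as passed to the test
shift || true
node_ids=("$@")                        # e.g. "0" for no_shrink_alloc

nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024

nodes_test=()
if (( ${#node_ids[@]} > 0 )); then
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages         # everything lands on the listed node(s)
    done
fi

for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
done

Saved to a file and invoked with the arguments 2097152 0, the sketch prints node0=1024 expecting 1024, which is the per-node state that verify_nr_hugepages checks in the trace that follows.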
00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.487 23:45:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:06.785 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:06.785 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.785 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40095484 kB' 'MemAvailable: 44767472 kB' 'Buffers: 2696 kB' 'Cached: 14301976 kB' 'SwapCached: 0 kB' 'Active: 10358640 kB' 'Inactive: 4455220 kB' 'Active(anon): 9792188 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512460 kB' 'Mapped: 228108 kB' 'Shmem: 9283000 kB' 'KReclaimable: 294912 kB' 'Slab: 929228 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 634316 kB' 'KernelStack: 22080 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11142000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.785 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.785 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
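The long runs of setup/common.sh@31 IFS=': ', @31 read -r var val _, @32 field test and @32 continue records that dominate this part of the trace are single passes of the get_meminfo helper over /proc/meminfo (or a per-node meminfo file when a node id is passed): it skips every entry until it reaches the requested field, then echoes that field's value and returns, which is why each completed scan here finishes with echo 0 for AnonHugePages and HugePages_Surp. The following is a simplified, self-contained sketch of that scan; the function body is an assumption reconstructed from the trace, not the setup/common.sh source, and the trailing call only illustrates how it is used.

# Simplified reconstruction of the scan behind the repeated IFS=': ' /
# read -r var val _ / continue records in this trace. The function body is an
# assumption pieced together from the trace, not the setup/common.sh source.
shopt -s extglob                          # needed for the "Node <id> " prefix strip

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # when a node is given, the per-node statistics live under sysfs instead
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node entries carry a "Node <id> " prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # keep skipping until the requested field
        echo "$val"                       # e.g. 0 for HugePages_Surp on this machine
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp                # prints 0 here, matching the echo 0 above

Reading the whole file with mapfile before scanning mirrors the @28/@29 records above and keeps the "Node <id>" prefix handling for the sysfs case in one place.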
00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.786 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40093912 kB' 'MemAvailable: 44765900 kB' 'Buffers: 2696 kB' 'Cached: 14301976 kB' 'SwapCached: 0 kB' 'Active: 10362764 kB' 'Inactive: 4455220 kB' 'Active(anon): 9796312 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515144 kB' 'Mapped: 228096 kB' 'Shmem: 9283000 kB' 'KReclaimable: 294912 kB' 'Slab: 929228 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 634316 kB' 'KernelStack: 22160 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11161624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216604 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 
'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 
23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 
23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.787 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.788 23:45:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40091212 kB' 'MemAvailable: 44763200 kB' 'Buffers: 2696 kB' 'Cached: 14301976 kB' 'SwapCached: 0 kB' 'Active: 10355648 kB' 'Inactive: 4455220 kB' 'Active(anon): 9789196 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509500 kB' 'Mapped: 227652 kB' 'Shmem: 9283000 kB' 'KReclaimable: 294912 kB' 'Slab: 929264 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 634352 kB' 'KernelStack: 22096 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11138580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216568 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.789 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.790 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:03:06.791 nr_hugepages=1024 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:06.791 resv_hugepages=0 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:06.791 surplus_hugepages=0 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:06.791 anon_hugepages=0 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40091776 kB' 'MemAvailable: 44763764 kB' 'Buffers: 2696 kB' 'Cached: 14302020 kB' 'SwapCached: 0 kB' 'Active: 10355228 kB' 'Inactive: 4455220 kB' 'Active(anon): 9788776 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509040 kB' 'Mapped: 227592 kB' 'Shmem: 9283044 kB' 'KReclaimable: 294912 kB' 'Slab: 929272 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 634360 kB' 'KernelStack: 22064 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11137832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.791 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.792 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.793 
23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18322172 kB' 'MemUsed: 14316968 kB' 'SwapCached: 0 kB' 'Active: 6275144 kB' 'Inactive: 4307464 kB' 'Active(anon): 5986080 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4307464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10249220 kB' 'Mapped: 100756 kB' 'AnonPages: 336588 kB' 'Shmem: 5652692 kB' 'KernelStack: 13352 kB' 'PageTables: 5376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192348 kB' 'Slab: 516116 kB' 'SReclaimable: 192348 kB' 'SUnreclaim: 323768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.793 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.794 23:45:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:06.794 node0=1024 expecting 1024 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.794 23:45:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.099 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:10.099 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:10.099 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:10.099 23:45:10 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40127048 kB' 'MemAvailable: 44799036 kB' 'Buffers: 2696 kB' 'Cached: 14302128 kB' 'SwapCached: 0 kB' 'Active: 10360000 kB' 'Inactive: 4455220 kB' 'Active(anon): 9793548 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513928 kB' 'Mapped: 227632 kB' 'Shmem: 9283152 kB' 'KReclaimable: 294912 kB' 'Slab: 928328 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633416 kB' 'KernelStack: 22128 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11138632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.099 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.100 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.100 23:45:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40126236 kB' 'MemAvailable: 44798224 kB' 'Buffers: 2696 kB' 'Cached: 14302132 kB' 'SwapCached: 0 kB' 'Active: 10359804 kB' 'Inactive: 4455220 kB' 'Active(anon): 9793352 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513804 kB' 'Mapped: 227608 kB' 'Shmem: 9283156 kB' 'KReclaimable: 294912 kB' 'Slab: 928308 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633396 kB' 'KernelStack: 22128 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11138652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
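The trace above is setup/common.sh's get_meminfo helper walking a captured /proc/meminfo snapshot field by field (IFS=': '; read -r var val _) until it reaches the requested key, then echoing that value and returning. A condensed standalone sketch of the same lookup pattern is shown below; the helper name get_meminfo_value is illustrative only and is not the SPDK script itself.

# Illustrative sketch of the lookup pattern traced above (hypothetical name).
get_meminfo_value() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # A per-node snapshot lives under sysfs when a NUMA node is requested.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <n> "; strip it, then split
    # each line on ": " and stop at the requested field name.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"   # numeric value; a trailing "kB" (if any) lands in $_
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$file")
    return 1
}

# e.g. the lookups traced in this pass:
get_meminfo_value AnonHugePages     # -> 0
get_meminfo_value HugePages_Surp    # -> 0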
00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.101 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.102 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40126080 kB' 'MemAvailable: 44798068 kB' 'Buffers: 2696 kB' 'Cached: 
14302148 kB' 'SwapCached: 0 kB' 'Active: 10359092 kB' 'Inactive: 4455220 kB' 'Active(anon): 9792640 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513036 kB' 'Mapped: 227608 kB' 'Shmem: 9283172 kB' 'KReclaimable: 294912 kB' 'Slab: 928388 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633476 kB' 'KernelStack: 22032 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11138672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216568 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:03:10.103 23:45:10 setup.sh.hugepages.no_shrink_alloc -- [get_meminfo field scan: the '@31 read -r var val _' / '@32 [[ <field> == HugePages_Rsvd ]]' / '@32 continue' / '@31 IFS' cycle repeats for every field from Active through Unaccepted; none of them match HugePages_Rsvd]
00:03:10.104 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.104 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.104 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.104 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.104 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.105 nr_hugepages=1024 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.105 resv_hugepages=0 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.105 surplus_hugepages=0 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.105 anon_hugepages=0 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60295208 kB' 'MemFree: 40125828 kB' 'MemAvailable: 44797816 kB' 'Buffers: 2696 kB' 'Cached: 14302168 kB' 'SwapCached: 0 kB' 'Active: 10359548 kB' 'Inactive: 4455220 kB' 'Active(anon): 9793096 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513504 kB' 'Mapped: 227608 kB' 'Shmem: 9283192 kB' 'KReclaimable: 294912 kB' 'Slab: 928388 kB' 'SReclaimable: 294912 kB' 'SUnreclaim: 633476 kB' 'KernelStack: 22080 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11138696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216568 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3196276 kB' 'DirectMap2M: 17461248 kB' 'DirectMap1G: 48234496 kB' 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.105 23:45:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:10.105 23:45:10 setup.sh.hugepages.no_shrink_alloc -- [get_meminfo field scan: the same read/compare/continue cycle repeats for every field from Active through CmaFree; none of them match HugePages_Total]
00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.106 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18350116 kB' 'MemUsed: 14289024 kB' 'SwapCached: 0 kB' 'Active: 6277116 kB' 'Inactive: 4307464 kB' 'Active(anon): 5988052 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4307464 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10249308 kB' 'Mapped: 100764 kB' 'AnonPages: 338648 kB' 'Shmem: 5652780 kB' 'KernelStack: 13352 kB' 'PageTables: 5304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192348 kB' 'Slab: 515276 kB' 'SReclaimable: 192348 kB' 'SUnreclaim: 322928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.107 23:45:10 setup.sh.hugepages.no_shrink_alloc -- [node0 get_meminfo field scan: the same read/compare/continue cycle repeats for every node0 field from Active(file) through FileHugePages; none of them match HugePages_Surp]
00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.108 23:45:10
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:10.108 node0=1024 expecting 1024 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:10.108 00:03:10.108 real 0m6.274s 00:03:10.108 user 0m2.159s 00:03:10.108 sys 0m4.104s 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:10.108 23:45:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:10.108 ************************************ 00:03:10.108 END TEST no_shrink_alloc 00:03:10.108 ************************************ 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.108 23:45:10 
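For readers following the trace: everything from the meminfo dumps down to the 'node0=1024 expecting 1024' line is one helper, get_meminfo, scanning /proc/meminfo (or a node's own meminfo file) for a single field. A minimal, self-contained sketch of that lookup follows; it is written for this log as an illustration, not copied from SPDK's setup/common.sh, so the exact structure is an assumption while the behaviour mirrors the trace above:

#!/usr/bin/env bash
# Illustrative sketch only (assumed structure, mirrors the traced behaviour).
# get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo, or from
# /sys/devices/system/node/nodeNODE/meminfo when NODE is given (per-node files
# prefix each line with "Node <n> ", which is stripped first).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node [0-9]* }              # no-op for /proc/meminfo lines
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then          # e.g. HugePages_Total, HugePages_Rsvd
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# With the values seen in this run:
nr_hugepages=1024
resv=$(get_meminfo HugePages_Rsvd)      # 0
surp=$(get_meminfo HugePages_Surp)      # 0
total=$(get_meminfo HugePages_Total)    # 1024
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent: $total pages"

With the snapshot above this resolves HugePages_Rsvd to 0 and HugePages_Total to 1024, which is what the 'nr_hugepages=1024' / 'resv_hugepages=0' lines and the final 'node0=1024 expecting 1024' assertion reflect.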
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:10.108 23:45:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:10.108 00:03:10.108 real 0m26.220s 00:03:10.108 user 0m9.111s 00:03:10.108 sys 0m15.816s 00:03:10.108 23:45:10 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:10.108 23:45:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:10.108 ************************************ 00:03:10.108 END TEST hugepages 00:03:10.108 ************************************ 00:03:10.108 23:45:10 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:10.108 23:45:10 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:10.108 23:45:10 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:10.108 23:45:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:10.108 ************************************ 00:03:10.108 START TEST driver 00:03:10.108 ************************************ 00:03:10.108 23:45:10 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:10.108 * Looking for test storage... 
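The hugepages suite above finishes by resetting every pool it allocated. Sketched from the clear_hp trace (setup/hugepages.sh@39-@45): the real helper walks a nodes_sys array and the xtrace only records a bare 'echo 0', so the sysfs walk and the nr_hugepages redirect target below are assumptions rather than a copy of the SPDK source.

# clear_hp, roughly: zero out every hugepage pool on every NUMA node
for node_dir in /sys/devices/system/node/node*/; do
    for hp in "$node_dir"hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # assumed target; the xtrace only shows 'echo 0'
    done
done
export CLEAR_HUGE=yes                 # exported by clear_hp, presumably read by scripts/setup.sh later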
00:03:10.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:10.108 23:45:10 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:10.108 23:45:10 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.108 23:45:10 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.305 23:45:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:14.305 23:45:14 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:14.305 23:45:14 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:14.305 23:45:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:14.305 ************************************ 00:03:14.305 START TEST guess_driver 00:03:14.305 ************************************ 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:14.305 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:14.305 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:14.305 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:14.305 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:14.305 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:14.305 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:14.305 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:14.305 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:14.306 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:14.306 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:14.306 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:14.306 23:45:14 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:14.306 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:14.306 Looking for driver=vfio-pci 00:03:14.306 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:14.306 23:45:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:14.306 23:45:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.306 23:45:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.597 23:45:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.976 23:45:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.976 23:45:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.976 23:45:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.976 23:45:19 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:18.976 23:45:19 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:18.976 23:45:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.976 23:45:19 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.248 00:03:24.248 real 0m9.454s 00:03:24.248 user 0m2.323s 00:03:24.248 sys 0m4.788s 00:03:24.248 23:45:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:24.248 23:45:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:24.248 ************************************ 00:03:24.248 END TEST guess_driver 00:03:24.248 ************************************ 00:03:24.248 00:03:24.248 real 0m13.799s 00:03:24.248 user 0m3.369s 00:03:24.248 sys 0m7.124s 00:03:24.248 23:45:24 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:24.248 
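guess_driver settles on vfio-pci because the host has populated IOMMU groups ("(( 176 > 0 ))" in the trace) and modprobe can resolve vfio_pci to real .ko modules. A minimal sketch of that decision, with names taken from the setup/driver.sh trace; the real pick_driver may try other drivers before giving up, and only the branch exercised in this run is shown.

shopt -s nullglob                      # so an empty iommu_groups dir counts as zero groups
pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    # unsafe no-IOMMU mode is probed via /sys/module/vfio/parameters/enable_unsafe_noiommu_mode,
    # but with 176 groups present it is not needed on this host
    if (( ${#iommu_groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo 'No valid driver found'   # exact fallback string compared at driver.sh@51
    fi
}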
23:45:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:24.248 ************************************ 00:03:24.248 END TEST driver 00:03:24.248 ************************************ 00:03:24.248 23:45:24 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:24.248 23:45:24 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:24.248 23:45:24 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:24.248 23:45:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.248 ************************************ 00:03:24.248 START TEST devices 00:03:24.248 ************************************ 00:03:24.248 23:45:24 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:24.248 * Looking for test storage... 00:03:24.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.248 23:45:24 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:24.248 23:45:24 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:24.248 23:45:24 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.248 23:45:24 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:27.540 23:45:28 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:27.540 23:45:28 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:27.540 No valid GPT data, 
bailing 00:03:27.540 23:45:28 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:27.540 23:45:28 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:27.540 23:45:28 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:27.540 23:45:28 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:27.540 23:45:28 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:27.540 23:45:28 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:27.540 23:45:28 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:27.540 23:45:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:27.540 ************************************ 00:03:27.540 START TEST nvme_mount 00:03:27.540 ************************************ 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:27.540 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:27.540 23:45:28 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:27.541 23:45:28 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:28.921 Creating new GPT entries in memory. 00:03:28.921 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:28.921 other utilities. 00:03:28.921 23:45:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:28.921 23:45:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:28.921 23:45:29 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:28.921 23:45:29 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:28.921 23:45:29 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:29.860 Creating new GPT entries in memory. 00:03:29.860 The operation has completed successfully. 00:03:29.860 23:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:29.860 23:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:29.860 23:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3377807 00:03:29.860 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.860 23:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:29.860 23:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.860 23:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:29.860 23:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:29.860 23:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
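The nvme_mount scenario traced above is a fixed sequence: wipe the GPT, create a single ~1 GiB partition, format it ext4, mount it under test/setup/nvme_mount, and leave a marker file for verify() to find. Condensed sketch; $SPDK_DIR is a stand-in for the full Jenkins workspace path, and the marker-file creation appears in the trace only as a bare ':' builtin, so it is shown here as the last step.

disk=/dev/nvme0n1
mnt=$SPDK_DIR/test/setup/nvme_mount                  # stand-in for /var/jenkins/workspace/.../spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all                             # destroy any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # one partition, sectors 2048-2099199
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
: > "$mnt/test_nvme"                                 # empty marker file, later checked by verify()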
00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.861 23:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:33.156 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:33.156 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:33.156 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:33.156 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:33.156 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:33.156 
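Between scenarios the test tears its state back down. cleanup_nvme, as traced just above, unmounts and then lets wipefs strip first the ext4 superblock magic (the "2 bytes ... 53 ef" at offset 0x438) and then both GPT headers plus the protective MBR on the whole disk. A sketch, reusing the $mnt stand-in from the previous snippet:

cleanup_nvme() {
    mountpoint -q "$mnt" && umount "$mnt"                    # only unmount if something is mounted there
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # drops the ext4 signature
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # drops primary/backup GPT and the protective MBR
}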
23:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.156 23:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.447 23:45:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.447 23:45:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.787 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.788 23:45:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:39.788 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:39.788 00:03:39.788 real 0m12.171s 00:03:39.788 user 0m3.466s 00:03:39.788 sys 0m6.566s 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:39.788 23:45:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:39.788 ************************************ 00:03:39.788 END TEST nvme_mount 00:03:39.788 
************************************ 00:03:39.788 23:45:40 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:39.788 23:45:40 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:39.788 23:45:40 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.788 23:45:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.048 ************************************ 00:03:40.048 START TEST dm_mount 00:03:40.048 ************************************ 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:40.048 23:45:40 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:41.015 Creating new GPT entries in memory. 00:03:41.015 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:41.015 other utilities. 00:03:41.015 23:45:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:41.015 23:45:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.015 23:45:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:41.015 23:45:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.015 23:45:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:41.953 Creating new GPT entries in memory. 00:03:41.953 The operation has completed successfully. 
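dm_mount needs two equal partitions, and partition_drive computes their sector ranges incrementally: the first partition starts at sector 2048, each later one starts right after the previous end, and every partition is 1073741824 / 512 = 2097152 sectors. That is why the first sgdisk call above is --new=1:2048:2099199 and the second, which follows below, is --new=2:2099200:4196351. The loop, condensed from the setup/common.sh trace ($disk stands in for /dev/nvme0n1):

disk=/dev/nvme0n1 part_no=2
size=$((1073741824 / 512))                  # 1 GiB in 512-byte sectors (2097152)
part_start=0 part_end=0
for ((part = 1; part <= part_no; part++)); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done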
00:03:41.953 23:45:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:41.953 23:45:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.953 23:45:42 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:41.953 23:45:42 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.953 23:45:42 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:42.939 The operation has completed successfully. 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3382226 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:42.939 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:42.940 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:42.940 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:42.940 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:42.940 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:42.940 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.940 23:45:43 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:42.940 23:45:43 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.940 23:45:43 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:42.940 23:45:43 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.211 23:45:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:46.501 23:45:46 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:46.501 
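Every verify pass in these device tests works the same way: constrain setup.sh to the test controller via PCI_ALLOWED, re-run 'setup.sh config', and require that 0000:d8:00.0 is reported as "Active devices: ..., so not binding PCI dev" with the expected mount or dm holder listed; when a mount point was given, the marker file must still be present. A sketch reconstructed from the trace, with the marker-file handling simplified; $SPDK_DIR is again a stand-in, and the output field positions come from the "read -r pci _ _ status" seen above.

verify() {
    local dev=$1 mounts=$2 mount_point=$3 test_file=$4 found=0 pci status
    while read -r pci _ _ status; do
        [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED="$dev" "$SPDK_DIR/scripts/setup.sh" config)
    (( found == 1 )) || return 1
    [[ -z $mount_point ]] && return 0
    mountpoint -q "$mount_point" && [[ -e $test_file ]]
}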
23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.501 23:45:47 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:49.786 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:49.786 00:03:49.786 real 0m9.903s 00:03:49.786 user 0m2.412s 00:03:49.786 sys 0m4.532s 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:49.786 23:45:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:49.786 ************************************ 00:03:49.786 END TEST dm_mount 00:03:49.786 ************************************ 00:03:49.786 23:45:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:49.786 23:45:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:49.786 23:45:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.786 23:45:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.786 
23:45:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:49.786 23:45:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:49.786 23:45:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:50.044 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:50.044 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:50.044 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:50.044 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:50.044 23:45:50 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:50.044 23:45:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.044 23:45:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:50.044 23:45:50 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.044 23:45:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:50.044 23:45:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.044 23:45:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:50.044 00:03:50.044 real 0m26.314s 00:03:50.044 user 0m7.350s 00:03:50.044 sys 0m13.765s 00:03:50.044 23:45:50 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:50.045 23:45:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.045 ************************************ 00:03:50.045 END TEST devices 00:03:50.045 ************************************ 00:03:50.302 00:03:50.302 real 1m29.721s 00:03:50.302 user 0m26.942s 00:03:50.302 sys 0m50.959s 00:03:50.302 23:45:50 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:50.302 23:45:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.302 ************************************ 00:03:50.302 END TEST setup.sh 00:03:50.302 ************************************ 00:03:50.302 23:45:50 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:53.590 Hugepages 00:03:53.590 node hugesize free / total 00:03:53.590 node0 1048576kB 0 / 0 00:03:53.590 node0 2048kB 2048 / 2048 00:03:53.590 node1 1048576kB 0 / 0 00:03:53.590 node1 2048kB 0 / 0 00:03:53.590 00:03:53.590 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.590 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:53.590 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:53.590 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:53.590 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:53.590 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:53.590 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:53.590 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:53.590 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:53.590 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:53.590 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:53.590 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:53.590 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:53.590 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:53.590 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:53.590 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:53.590 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:53.590 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:53.590 23:45:53 -- spdk/autotest.sh@130 -- # uname -s 00:03:53.590 
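The cleanup traced above (cleanup_dm followed by cleanup_nvme) can be reproduced outside the harness. A minimal stand-alone sketch, using the dm_mount path, the nvme_dm_test mapping and the nvme0n1 partitions from this run (illustrative, not part of the captured output):

    # Tear down the device-mapper test volume and wipe its backing partitions,
    # mirroring the cleanup_dm/cleanup_nvme steps logged above.
    dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount

    if mountpoint -q "$dm_mount"; then
        umount "$dm_mount"
    fi
    if [[ -L /dev/mapper/nvme_dm_test ]]; then
        dmsetup remove --force nvme_dm_test
    fi
    for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
        [[ -b $part ]] && wipefs --all "$part"
    done
    # Clearing the whole namespace last removes the GPT/PMBR signatures,
    # which is what produces the "bytes were erased" messages above.
    if [[ -b /dev/nvme0n1 ]]; then
        wipefs --all /dev/nvme0n1
    fi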
23:45:53 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:53.590 23:45:53 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:53.590 23:45:53 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.881 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.881 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:58.288 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:58.288 23:45:58 -- common/autotest_common.sh@1528 -- # sleep 1 00:03:59.225 23:45:59 -- common/autotest_common.sh@1529 -- # bdfs=() 00:03:59.225 23:45:59 -- common/autotest_common.sh@1529 -- # local bdfs 00:03:59.225 23:45:59 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:03:59.225 23:45:59 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:03:59.225 23:45:59 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:59.225 23:45:59 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:59.225 23:45:59 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.225 23:45:59 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:59.225 23:45:59 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:59.484 23:45:59 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:59.484 23:45:59 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:03:59.484 23:45:59 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.775 Waiting for block devices as requested 00:04:02.775 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:02.775 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:02.775 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:02.775 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:02.775 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:02.775 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:02.775 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:02.775 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:02.775 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:03.034 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:03.034 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:03.034 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:03.293 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:03.293 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:03.293 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:03.293 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:03.552 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:03.552 23:46:04 -- common/autotest_common.sh@1534 -- # for bdf in 
"${bdfs[@]}" 00:04:03.552 23:46:04 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:03.552 23:46:04 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:03.552 23:46:04 -- common/autotest_common.sh@1498 -- # grep 0000:d8:00.0/nvme/nvme 00:04:03.552 23:46:04 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:03.552 23:46:04 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:03.812 23:46:04 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:03.812 23:46:04 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:03.812 23:46:04 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:03.812 23:46:04 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:03.812 23:46:04 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:03.812 23:46:04 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:03.812 23:46:04 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:03.812 23:46:04 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:04:03.812 23:46:04 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:03.812 23:46:04 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:03.812 23:46:04 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:03.812 23:46:04 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:03.812 23:46:04 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:03.812 23:46:04 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:03.812 23:46:04 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:03.812 23:46:04 -- common/autotest_common.sh@1553 -- # continue 00:04:03.812 23:46:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:03.812 23:46:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.812 23:46:04 -- common/autotest_common.sh@10 -- # set +x 00:04:03.812 23:46:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:03.812 23:46:04 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:03.812 23:46:04 -- common/autotest_common.sh@10 -- # set +x 00:04:03.812 23:46:04 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.099 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:07.099 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.003 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:09.003 23:46:09 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:09.003 23:46:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.003 
23:46:09 -- common/autotest_common.sh@10 -- # set +x 00:04:09.003 23:46:09 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:09.003 23:46:09 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:09.003 23:46:09 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:09.003 23:46:09 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:09.003 23:46:09 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:09.003 23:46:09 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:09.003 23:46:09 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:09.003 23:46:09 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:09.003 23:46:09 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:09.003 23:46:09 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:09.003 23:46:09 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:09.003 23:46:09 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:09.003 23:46:09 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:04:09.003 23:46:09 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:09.003 23:46:09 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:09.003 23:46:09 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:09.003 23:46:09 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:09.003 23:46:09 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:09.003 23:46:09 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:d8:00.0 00:04:09.003 23:46:09 -- common/autotest_common.sh@1588 -- # [[ -z 0000:d8:00.0 ]] 00:04:09.003 23:46:09 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3392012 00:04:09.003 23:46:09 -- common/autotest_common.sh@1594 -- # waitforlisten 3392012 00:04:09.003 23:46:09 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.003 23:46:09 -- common/autotest_common.sh@827 -- # '[' -z 3392012 ']' 00:04:09.003 23:46:09 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.003 23:46:09 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:09.003 23:46:09 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.003 23:46:09 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:09.003 23:46:09 -- common/autotest_common.sh@10 -- # set +x 00:04:09.263 [2024-05-14 23:46:09.643050] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:04:09.263 [2024-05-14 23:46:09.643100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392012 ] 00:04:09.263 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.263 [2024-05-14 23:46:09.712034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.263 [2024-05-14 23:46:09.785853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.200 23:46:10 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:10.200 23:46:10 -- common/autotest_common.sh@860 -- # return 0 00:04:10.200 23:46:10 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:04:10.200 23:46:10 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:10.200 23:46:10 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:13.492 nvme0n1 00:04:13.492 23:46:13 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:13.492 [2024-05-14 23:46:13.587948] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:13.492 request: 00:04:13.492 { 00:04:13.492 "nvme_ctrlr_name": "nvme0", 00:04:13.492 "password": "test", 00:04:13.492 "method": "bdev_nvme_opal_revert", 00:04:13.492 "req_id": 1 00:04:13.492 } 00:04:13.492 Got JSON-RPC error response 00:04:13.492 response: 00:04:13.492 { 00:04:13.492 "code": -32602, 00:04:13.492 "message": "Invalid parameters" 00:04:13.492 } 00:04:13.492 23:46:13 -- common/autotest_common.sh@1600 -- # true 00:04:13.492 23:46:13 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:04:13.492 23:46:13 -- common/autotest_common.sh@1604 -- # killprocess 3392012 00:04:13.492 23:46:13 -- common/autotest_common.sh@946 -- # '[' -z 3392012 ']' 00:04:13.492 23:46:13 -- common/autotest_common.sh@950 -- # kill -0 3392012 00:04:13.492 23:46:13 -- common/autotest_common.sh@951 -- # uname 00:04:13.492 23:46:13 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:13.492 23:46:13 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3392012 00:04:13.492 23:46:13 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:13.492 23:46:13 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:13.492 23:46:13 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3392012' 00:04:13.492 killing process with pid 3392012 00:04:13.492 23:46:13 -- common/autotest_common.sh@965 -- # kill 3392012 00:04:13.492 23:46:13 -- common/autotest_common.sh@970 -- # wait 3392012 00:04:15.398 23:46:15 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:15.398 23:46:15 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:15.398 23:46:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:15.398 23:46:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:15.398 23:46:15 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:15.398 23:46:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:15.398 23:46:15 -- common/autotest_common.sh@10 -- # set +x 00:04:15.398 23:46:15 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:15.398 23:46:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.398 23:46:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 
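The OPAL revert attempt above reduces to two JSON-RPC calls against the running spdk_tgt. A hand-run sketch with rpc.py, using the controller name, PCI address and password from this run (the second call is expected to fail on this drive, exactly as logged):

    # Attach the NVMe controller over PCIe, then try to revert its OPAL state.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    # Returns -32602 "Invalid parameters" here because nvme0 does not support OPAL.
    $rpc bdev_nvme_opal_revert -b nvme0 -p test || echo 'controller has no OPAL support, nothing to revert'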
00:04:15.398 23:46:15 -- common/autotest_common.sh@10 -- # set +x 00:04:15.398 ************************************ 00:04:15.398 START TEST env 00:04:15.398 ************************************ 00:04:15.398 23:46:15 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:15.657 * Looking for test storage... 00:04:15.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:15.657 23:46:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.657 23:46:16 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.657 23:46:16 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:15.657 23:46:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.657 ************************************ 00:04:15.657 START TEST env_memory 00:04:15.657 ************************************ 00:04:15.657 23:46:16 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.657 00:04:15.657 00:04:15.657 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.657 http://cunit.sourceforge.net/ 00:04:15.657 00:04:15.657 00:04:15.657 Suite: memory 00:04:15.657 Test: alloc and free memory map ...[2024-05-14 23:46:16.096407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.657 passed 00:04:15.657 Test: mem map translation ...[2024-05-14 23:46:16.114155] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.657 [2024-05-14 23:46:16.114171] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.657 [2024-05-14 23:46:16.114208] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.657 [2024-05-14 23:46:16.114217] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.657 passed 00:04:15.657 Test: mem map registration ...[2024-05-14 23:46:16.149203] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:15.657 [2024-05-14 23:46:16.149218] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:15.657 passed 00:04:15.657 Test: mem map adjacent registrations ...passed 00:04:15.657 00:04:15.657 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.657 suites 1 1 n/a 0 0 00:04:15.657 tests 4 4 4 0 0 00:04:15.657 asserts 152 152 152 0 n/a 00:04:15.657 00:04:15.657 Elapsed time = 0.129 seconds 00:04:15.657 00:04:15.657 real 0m0.137s 00:04:15.657 user 0m0.127s 00:04:15.657 sys 0m0.010s 00:04:15.657 23:46:16 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:15.657 23:46:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.657 ************************************ 00:04:15.657 END TEST 
env_memory 00:04:15.657 ************************************ 00:04:15.657 23:46:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.657 23:46:16 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.657 23:46:16 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:15.657 23:46:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.918 ************************************ 00:04:15.918 START TEST env_vtophys 00:04:15.918 ************************************ 00:04:15.918 23:46:16 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.918 EAL: lib.eal log level changed from notice to debug 00:04:15.918 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.918 EAL: Detected lcore 1 as core 1 on socket 0 00:04:15.918 EAL: Detected lcore 2 as core 2 on socket 0 00:04:15.918 EAL: Detected lcore 3 as core 3 on socket 0 00:04:15.918 EAL: Detected lcore 4 as core 4 on socket 0 00:04:15.918 EAL: Detected lcore 5 as core 5 on socket 0 00:04:15.918 EAL: Detected lcore 6 as core 6 on socket 0 00:04:15.918 EAL: Detected lcore 7 as core 8 on socket 0 00:04:15.918 EAL: Detected lcore 8 as core 9 on socket 0 00:04:15.918 EAL: Detected lcore 9 as core 10 on socket 0 00:04:15.918 EAL: Detected lcore 10 as core 11 on socket 0 00:04:15.918 EAL: Detected lcore 11 as core 12 on socket 0 00:04:15.918 EAL: Detected lcore 12 as core 13 on socket 0 00:04:15.918 EAL: Detected lcore 13 as core 14 on socket 0 00:04:15.918 EAL: Detected lcore 14 as core 16 on socket 0 00:04:15.918 EAL: Detected lcore 15 as core 17 on socket 0 00:04:15.918 EAL: Detected lcore 16 as core 18 on socket 0 00:04:15.918 EAL: Detected lcore 17 as core 19 on socket 0 00:04:15.918 EAL: Detected lcore 18 as core 20 on socket 0 00:04:15.918 EAL: Detected lcore 19 as core 21 on socket 0 00:04:15.918 EAL: Detected lcore 20 as core 22 on socket 0 00:04:15.918 EAL: Detected lcore 21 as core 24 on socket 0 00:04:15.918 EAL: Detected lcore 22 as core 25 on socket 0 00:04:15.918 EAL: Detected lcore 23 as core 26 on socket 0 00:04:15.918 EAL: Detected lcore 24 as core 27 on socket 0 00:04:15.918 EAL: Detected lcore 25 as core 28 on socket 0 00:04:15.918 EAL: Detected lcore 26 as core 29 on socket 0 00:04:15.918 EAL: Detected lcore 27 as core 30 on socket 0 00:04:15.918 EAL: Detected lcore 28 as core 0 on socket 1 00:04:15.918 EAL: Detected lcore 29 as core 1 on socket 1 00:04:15.918 EAL: Detected lcore 30 as core 2 on socket 1 00:04:15.918 EAL: Detected lcore 31 as core 3 on socket 1 00:04:15.918 EAL: Detected lcore 32 as core 4 on socket 1 00:04:15.918 EAL: Detected lcore 33 as core 5 on socket 1 00:04:15.918 EAL: Detected lcore 34 as core 6 on socket 1 00:04:15.918 EAL: Detected lcore 35 as core 8 on socket 1 00:04:15.918 EAL: Detected lcore 36 as core 9 on socket 1 00:04:15.918 EAL: Detected lcore 37 as core 10 on socket 1 00:04:15.918 EAL: Detected lcore 38 as core 11 on socket 1 00:04:15.918 EAL: Detected lcore 39 as core 12 on socket 1 00:04:15.918 EAL: Detected lcore 40 as core 13 on socket 1 00:04:15.918 EAL: Detected lcore 41 as core 14 on socket 1 00:04:15.918 EAL: Detected lcore 42 as core 16 on socket 1 00:04:15.918 EAL: Detected lcore 43 as core 17 on socket 1 00:04:15.918 EAL: Detected lcore 44 as core 18 on socket 1 00:04:15.918 EAL: Detected lcore 45 as core 19 on socket 1 00:04:15.918 EAL: Detected lcore 46 as core 20 on socket 1 00:04:15.918 EAL: 
Detected lcore 47 as core 21 on socket 1 00:04:15.918 EAL: Detected lcore 48 as core 22 on socket 1 00:04:15.918 EAL: Detected lcore 49 as core 24 on socket 1 00:04:15.918 EAL: Detected lcore 50 as core 25 on socket 1 00:04:15.918 EAL: Detected lcore 51 as core 26 on socket 1 00:04:15.918 EAL: Detected lcore 52 as core 27 on socket 1 00:04:15.918 EAL: Detected lcore 53 as core 28 on socket 1 00:04:15.918 EAL: Detected lcore 54 as core 29 on socket 1 00:04:15.918 EAL: Detected lcore 55 as core 30 on socket 1 00:04:15.918 EAL: Detected lcore 56 as core 0 on socket 0 00:04:15.918 EAL: Detected lcore 57 as core 1 on socket 0 00:04:15.918 EAL: Detected lcore 58 as core 2 on socket 0 00:04:15.918 EAL: Detected lcore 59 as core 3 on socket 0 00:04:15.918 EAL: Detected lcore 60 as core 4 on socket 0 00:04:15.918 EAL: Detected lcore 61 as core 5 on socket 0 00:04:15.918 EAL: Detected lcore 62 as core 6 on socket 0 00:04:15.918 EAL: Detected lcore 63 as core 8 on socket 0 00:04:15.918 EAL: Detected lcore 64 as core 9 on socket 0 00:04:15.918 EAL: Detected lcore 65 as core 10 on socket 0 00:04:15.918 EAL: Detected lcore 66 as core 11 on socket 0 00:04:15.918 EAL: Detected lcore 67 as core 12 on socket 0 00:04:15.918 EAL: Detected lcore 68 as core 13 on socket 0 00:04:15.918 EAL: Detected lcore 69 as core 14 on socket 0 00:04:15.918 EAL: Detected lcore 70 as core 16 on socket 0 00:04:15.918 EAL: Detected lcore 71 as core 17 on socket 0 00:04:15.918 EAL: Detected lcore 72 as core 18 on socket 0 00:04:15.918 EAL: Detected lcore 73 as core 19 on socket 0 00:04:15.918 EAL: Detected lcore 74 as core 20 on socket 0 00:04:15.918 EAL: Detected lcore 75 as core 21 on socket 0 00:04:15.918 EAL: Detected lcore 76 as core 22 on socket 0 00:04:15.918 EAL: Detected lcore 77 as core 24 on socket 0 00:04:15.918 EAL: Detected lcore 78 as core 25 on socket 0 00:04:15.918 EAL: Detected lcore 79 as core 26 on socket 0 00:04:15.918 EAL: Detected lcore 80 as core 27 on socket 0 00:04:15.918 EAL: Detected lcore 81 as core 28 on socket 0 00:04:15.918 EAL: Detected lcore 82 as core 29 on socket 0 00:04:15.918 EAL: Detected lcore 83 as core 30 on socket 0 00:04:15.918 EAL: Detected lcore 84 as core 0 on socket 1 00:04:15.918 EAL: Detected lcore 85 as core 1 on socket 1 00:04:15.918 EAL: Detected lcore 86 as core 2 on socket 1 00:04:15.918 EAL: Detected lcore 87 as core 3 on socket 1 00:04:15.918 EAL: Detected lcore 88 as core 4 on socket 1 00:04:15.918 EAL: Detected lcore 89 as core 5 on socket 1 00:04:15.918 EAL: Detected lcore 90 as core 6 on socket 1 00:04:15.918 EAL: Detected lcore 91 as core 8 on socket 1 00:04:15.918 EAL: Detected lcore 92 as core 9 on socket 1 00:04:15.918 EAL: Detected lcore 93 as core 10 on socket 1 00:04:15.918 EAL: Detected lcore 94 as core 11 on socket 1 00:04:15.918 EAL: Detected lcore 95 as core 12 on socket 1 00:04:15.918 EAL: Detected lcore 96 as core 13 on socket 1 00:04:15.918 EAL: Detected lcore 97 as core 14 on socket 1 00:04:15.918 EAL: Detected lcore 98 as core 16 on socket 1 00:04:15.918 EAL: Detected lcore 99 as core 17 on socket 1 00:04:15.918 EAL: Detected lcore 100 as core 18 on socket 1 00:04:15.918 EAL: Detected lcore 101 as core 19 on socket 1 00:04:15.918 EAL: Detected lcore 102 as core 20 on socket 1 00:04:15.918 EAL: Detected lcore 103 as core 21 on socket 1 00:04:15.918 EAL: Detected lcore 104 as core 22 on socket 1 00:04:15.918 EAL: Detected lcore 105 as core 24 on socket 1 00:04:15.918 EAL: Detected lcore 106 as core 25 on socket 1 00:04:15.918 EAL: Detected lcore 107 as 
core 26 on socket 1 00:04:15.918 EAL: Detected lcore 108 as core 27 on socket 1 00:04:15.918 EAL: Detected lcore 109 as core 28 on socket 1 00:04:15.918 EAL: Detected lcore 110 as core 29 on socket 1 00:04:15.918 EAL: Detected lcore 111 as core 30 on socket 1 00:04:15.918 EAL: Maximum logical cores by configuration: 128 00:04:15.918 EAL: Detected CPU lcores: 112 00:04:15.918 EAL: Detected NUMA nodes: 2 00:04:15.918 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:15.918 EAL: Detected shared linkage of DPDK 00:04:15.918 EAL: No shared files mode enabled, IPC will be disabled 00:04:15.918 EAL: Bus pci wants IOVA as 'DC' 00:04:15.918 EAL: Buses did not request a specific IOVA mode. 00:04:15.918 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:15.918 EAL: Selected IOVA mode 'VA' 00:04:15.918 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.918 EAL: Probing VFIO support... 00:04:15.918 EAL: IOMMU type 1 (Type 1) is supported 00:04:15.918 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:15.918 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:15.918 EAL: VFIO support initialized 00:04:15.918 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.918 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.919 EAL: Setting up physically contiguous memory... 00:04:15.919 EAL: Setting maximum number of open files to 524288 00:04:15.919 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:15.919 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:15.919 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:15.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.919 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:15.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.919 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:15.919 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:15.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.919 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:15.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.919 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:15.919 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:15.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.919 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:15.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.919 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:15.919 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:15.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.919 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:15.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.919 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:15.919 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:15.919 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:15.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.919 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:15.919 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.919 EAL: Ask 
a virtual area of 0x400000000 bytes 00:04:15.919 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:15.919 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:15.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.919 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:15.919 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.919 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:15.919 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:15.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.919 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:15.919 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.919 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:15.919 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:15.919 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.919 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:15.919 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.919 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.919 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:15.919 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:15.919 EAL: Hugepages will be freed exactly as allocated. 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: TSC frequency is ~2500000 KHz 00:04:15.919 EAL: Main lcore 0 is ready (tid=7f14cddbba00;cpuset=[0]) 00:04:15.919 EAL: Trying to obtain current memory policy. 00:04:15.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.919 EAL: Restoring previous memory policy: 0 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was expanded by 2MB 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:15.919 EAL: Mem event callback 'spdk:(nil)' registered 00:04:15.919 00:04:15.919 00:04:15.919 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.919 http://cunit.sourceforge.net/ 00:04:15.919 00:04:15.919 00:04:15.919 Suite: components_suite 00:04:15.919 Test: vtophys_malloc_test ...passed 00:04:15.919 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:15.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.919 EAL: Restoring previous memory policy: 4 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was expanded by 4MB 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was shrunk by 4MB 00:04:15.919 EAL: Trying to obtain current memory policy. 
00:04:15.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.919 EAL: Restoring previous memory policy: 4 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was expanded by 6MB 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was shrunk by 6MB 00:04:15.919 EAL: Trying to obtain current memory policy. 00:04:15.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.919 EAL: Restoring previous memory policy: 4 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was expanded by 10MB 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was shrunk by 10MB 00:04:15.919 EAL: Trying to obtain current memory policy. 00:04:15.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.919 EAL: Restoring previous memory policy: 4 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was expanded by 18MB 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was shrunk by 18MB 00:04:15.919 EAL: Trying to obtain current memory policy. 00:04:15.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.919 EAL: Restoring previous memory policy: 4 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was expanded by 34MB 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was shrunk by 34MB 00:04:15.919 EAL: Trying to obtain current memory policy. 00:04:15.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.919 EAL: Restoring previous memory policy: 4 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was expanded by 66MB 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was shrunk by 66MB 00:04:15.919 EAL: Trying to obtain current memory policy. 
00:04:15.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.919 EAL: Restoring previous memory policy: 4 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was expanded by 130MB 00:04:15.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.919 EAL: request: mp_malloc_sync 00:04:15.919 EAL: No shared files mode enabled, IPC is disabled 00:04:15.919 EAL: Heap on socket 0 was shrunk by 130MB 00:04:15.919 EAL: Trying to obtain current memory policy. 00:04:15.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.178 EAL: Restoring previous memory policy: 4 00:04:16.178 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.178 EAL: request: mp_malloc_sync 00:04:16.178 EAL: No shared files mode enabled, IPC is disabled 00:04:16.178 EAL: Heap on socket 0 was expanded by 258MB 00:04:16.178 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.178 EAL: request: mp_malloc_sync 00:04:16.178 EAL: No shared files mode enabled, IPC is disabled 00:04:16.178 EAL: Heap on socket 0 was shrunk by 258MB 00:04:16.178 EAL: Trying to obtain current memory policy. 00:04:16.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.178 EAL: Restoring previous memory policy: 4 00:04:16.178 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.178 EAL: request: mp_malloc_sync 00:04:16.179 EAL: No shared files mode enabled, IPC is disabled 00:04:16.179 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.437 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.437 EAL: request: mp_malloc_sync 00:04:16.437 EAL: No shared files mode enabled, IPC is disabled 00:04:16.437 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.437 EAL: Trying to obtain current memory policy. 
00:04:16.437 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.698 EAL: Restoring previous memory policy: 4 00:04:16.698 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.698 EAL: request: mp_malloc_sync 00:04:16.698 EAL: No shared files mode enabled, IPC is disabled 00:04:16.698 EAL: Heap on socket 0 was expanded by 1026MB 00:04:16.698 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.004 EAL: request: mp_malloc_sync 00:04:17.004 EAL: No shared files mode enabled, IPC is disabled 00:04:17.004 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:17.004 passed 00:04:17.004 00:04:17.004 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.004 suites 1 1 n/a 0 0 00:04:17.004 tests 2 2 2 0 0 00:04:17.004 asserts 497 497 497 0 n/a 00:04:17.004 00:04:17.004 Elapsed time = 0.967 seconds 00:04:17.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.005 EAL: request: mp_malloc_sync 00:04:17.005 EAL: No shared files mode enabled, IPC is disabled 00:04:17.005 EAL: Heap on socket 0 was shrunk by 2MB 00:04:17.005 EAL: No shared files mode enabled, IPC is disabled 00:04:17.005 EAL: No shared files mode enabled, IPC is disabled 00:04:17.005 EAL: No shared files mode enabled, IPC is disabled 00:04:17.005 00:04:17.005 real 0m1.100s 00:04:17.005 user 0m0.634s 00:04:17.005 sys 0m0.434s 00:04:17.005 23:46:17 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.005 23:46:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:17.005 ************************************ 00:04:17.005 END TEST env_vtophys 00:04:17.005 ************************************ 00:04:17.005 23:46:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:17.005 23:46:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.005 23:46:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.005 23:46:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.005 ************************************ 00:04:17.005 START TEST env_pci 00:04:17.005 ************************************ 00:04:17.005 23:46:17 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:17.005 00:04:17.005 00:04:17.005 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.005 http://cunit.sourceforge.net/ 00:04:17.005 00:04:17.005 00:04:17.005 Suite: pci 00:04:17.005 Test: pci_hook ...[2024-05-14 23:46:17.486908] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3393502 has claimed it 00:04:17.005 EAL: Cannot find device (10000:00:01.0) 00:04:17.005 EAL: Failed to attach device on primary process 00:04:17.005 passed 00:04:17.005 00:04:17.005 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.005 suites 1 1 n/a 0 0 00:04:17.005 tests 1 1 1 0 0 00:04:17.005 asserts 25 25 25 0 n/a 00:04:17.005 00:04:17.005 Elapsed time = 0.034 seconds 00:04:17.005 00:04:17.005 real 0m0.056s 00:04:17.005 user 0m0.018s 00:04:17.005 sys 0m0.038s 00:04:17.005 23:46:17 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.005 23:46:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:17.005 ************************************ 00:04:17.005 END TEST env_pci 00:04:17.005 ************************************ 00:04:17.005 23:46:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:17.005 
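env.sh has just seeded argv with the core mask; the entries that follow append --base-virtaddr on Linux and hand the result to the DPDK post-init test. Run outside the harness, that invocation is simply (a sketch using the paths and values shown in this run):

    # Launch the DPDK post-initialization test with the EAL arguments env.sh builds here.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000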
23:46:17 env -- env/env.sh@15 -- # uname 00:04:17.005 23:46:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:17.005 23:46:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:17.005 23:46:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:17.005 23:46:17 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:17.005 23:46:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.005 23:46:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.265 ************************************ 00:04:17.265 START TEST env_dpdk_post_init 00:04:17.265 ************************************ 00:04:17.265 23:46:17 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:17.265 EAL: Detected CPU lcores: 112 00:04:17.265 EAL: Detected NUMA nodes: 2 00:04:17.265 EAL: Detected shared linkage of DPDK 00:04:17.265 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.265 EAL: Selected IOVA mode 'VA' 00:04:17.265 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.265 EAL: VFIO support initialized 00:04:17.265 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.265 EAL: Using IOMMU type 1 (Type 1) 00:04:17.265 EAL: Ignore mapping IO port bar(1) 00:04:17.265 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:17.265 EAL: Ignore mapping IO port bar(1) 00:04:17.265 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:17.265 EAL: Ignore mapping IO port bar(1) 00:04:17.265 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:17.265 EAL: Ignore mapping IO port bar(1) 00:04:17.265 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:17.265 EAL: Ignore mapping IO port bar(1) 00:04:17.265 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:17.265 EAL: Ignore mapping IO port bar(1) 00:04:17.265 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:17.265 EAL: Ignore mapping IO port bar(1) 00:04:17.265 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:17.265 EAL: Ignore mapping IO port bar(1) 00:04:17.265 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:17.265 EAL: Ignore mapping IO port bar(1) 00:04:17.265 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:17.525 EAL: Ignore mapping IO port bar(1) 00:04:17.525 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:17.525 EAL: Ignore mapping IO port bar(1) 00:04:17.525 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:17.525 EAL: Ignore mapping IO port bar(1) 00:04:17.525 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:17.525 EAL: Ignore mapping IO port bar(1) 00:04:17.525 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:17.525 EAL: Ignore mapping IO port bar(1) 00:04:17.525 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:17.525 EAL: Ignore mapping IO port bar(1) 00:04:17.525 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:17.525 EAL: 
Ignore mapping IO port bar(1) 00:04:17.525 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:18.094 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:22.289 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:22.289 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:22.289 Starting DPDK initialization... 00:04:22.289 Starting SPDK post initialization... 00:04:22.289 SPDK NVMe probe 00:04:22.289 Attaching to 0000:d8:00.0 00:04:22.289 Attached to 0000:d8:00.0 00:04:22.289 Cleaning up... 00:04:22.289 00:04:22.289 real 0m4.936s 00:04:22.289 user 0m3.656s 00:04:22.289 sys 0m0.336s 00:04:22.289 23:46:22 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.289 23:46:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.289 ************************************ 00:04:22.289 END TEST env_dpdk_post_init 00:04:22.289 ************************************ 00:04:22.289 23:46:22 env -- env/env.sh@26 -- # uname 00:04:22.289 23:46:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:22.289 23:46:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:22.289 23:46:22 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.289 23:46:22 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.289 23:46:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.289 ************************************ 00:04:22.289 START TEST env_mem_callbacks 00:04:22.289 ************************************ 00:04:22.289 23:46:22 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:22.289 EAL: Detected CPU lcores: 112 00:04:22.289 EAL: Detected NUMA nodes: 2 00:04:22.289 EAL: Detected shared linkage of DPDK 00:04:22.289 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:22.290 EAL: Selected IOVA mode 'VA' 00:04:22.290 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.290 EAL: VFIO support initialized 00:04:22.290 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:22.290 00:04:22.290 00:04:22.290 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.290 http://cunit.sourceforge.net/ 00:04:22.290 00:04:22.290 00:04:22.290 Suite: memory 00:04:22.290 Test: test ... 
00:04:22.290 register 0x200000200000 2097152 00:04:22.290 malloc 3145728 00:04:22.290 register 0x200000400000 4194304 00:04:22.290 buf 0x200000500000 len 3145728 PASSED 00:04:22.290 malloc 64 00:04:22.290 buf 0x2000004fff40 len 64 PASSED 00:04:22.290 malloc 4194304 00:04:22.290 register 0x200000800000 6291456 00:04:22.290 buf 0x200000a00000 len 4194304 PASSED 00:04:22.290 free 0x200000500000 3145728 00:04:22.290 free 0x2000004fff40 64 00:04:22.290 unregister 0x200000400000 4194304 PASSED 00:04:22.290 free 0x200000a00000 4194304 00:04:22.290 unregister 0x200000800000 6291456 PASSED 00:04:22.290 malloc 8388608 00:04:22.290 register 0x200000400000 10485760 00:04:22.290 buf 0x200000600000 len 8388608 PASSED 00:04:22.290 free 0x200000600000 8388608 00:04:22.290 unregister 0x200000400000 10485760 PASSED 00:04:22.290 passed 00:04:22.290 00:04:22.290 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.290 suites 1 1 n/a 0 0 00:04:22.290 tests 1 1 1 0 0 00:04:22.290 asserts 15 15 15 0 n/a 00:04:22.290 00:04:22.290 Elapsed time = 0.005 seconds 00:04:22.290 00:04:22.290 real 0m0.046s 00:04:22.290 user 0m0.012s 00:04:22.290 sys 0m0.034s 00:04:22.290 23:46:22 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.290 23:46:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:22.290 ************************************ 00:04:22.290 END TEST env_mem_callbacks 00:04:22.290 ************************************ 00:04:22.290 00:04:22.290 real 0m6.793s 00:04:22.290 user 0m4.621s 00:04:22.290 sys 0m1.204s 00:04:22.290 23:46:22 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.290 23:46:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.290 ************************************ 00:04:22.290 END TEST env 00:04:22.290 ************************************ 00:04:22.290 23:46:22 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:22.290 23:46:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.290 23:46:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.290 23:46:22 -- common/autotest_common.sh@10 -- # set +x 00:04:22.290 ************************************ 00:04:22.290 START TEST rpc 00:04:22.290 ************************************ 00:04:22.290 23:46:22 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:22.550 * Looking for test storage... 00:04:22.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:22.550 23:46:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3394476 00:04:22.550 23:46:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.550 23:46:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:22.550 23:46:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3394476 00:04:22.550 23:46:22 rpc -- common/autotest_common.sh@827 -- # '[' -z 3394476 ']' 00:04:22.550 23:46:22 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.550 23:46:22 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:22.550 23:46:22 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
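waitforlisten, echoed above, polls the target's RPC socket until it answers. A simplified sketch of an equivalent wait loop (the rpc_get_methods probe and the 0.5 s poll interval are assumptions for illustration, not the literal waitforlisten implementation):

    # Start the target with the bdev trace group enabled, then wait for /var/tmp/spdk.sock.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk"/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!

    while ! "$spdk"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$spdk_pid" || { echo 'spdk_tgt exited before listening' >&2; exit 1; }
        sleep 0.5
    done
    echo "spdk_tgt ($spdk_pid) is listening on /var/tmp/spdk.sock"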
00:04:22.550 23:46:22 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:22.550 23:46:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.550 [2024-05-14 23:46:22.960392] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:04:22.550 [2024-05-14 23:46:22.960439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394476 ] 00:04:22.550 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.550 [2024-05-14 23:46:23.029402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.550 [2024-05-14 23:46:23.098010] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:22.550 [2024-05-14 23:46:23.098051] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3394476' to capture a snapshot of events at runtime. 00:04:22.550 [2024-05-14 23:46:23.098061] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:22.550 [2024-05-14 23:46:23.098069] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:22.550 [2024-05-14 23:46:23.098093] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3394476 for offline analysis/debug. 00:04:22.550 [2024-05-14 23:46:23.098118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.486 23:46:23 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:23.486 23:46:23 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:23.486 23:46:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:23.486 23:46:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:23.486 23:46:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:23.486 23:46:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:23.486 23:46:23 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.486 23:46:23 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.486 23:46:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.486 ************************************ 00:04:23.486 START TEST rpc_integrity 00:04:23.486 ************************************ 00:04:23.486 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:23.486 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.486 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.486 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.486 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.486 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.486 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.486 23:46:23 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.486 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.486 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.486 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.486 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.486 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:23.486 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.486 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.486 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.487 { 00:04:23.487 "name": "Malloc0", 00:04:23.487 "aliases": [ 00:04:23.487 "67339b2c-f4ca-4edd-870f-7c21dec9e04b" 00:04:23.487 ], 00:04:23.487 "product_name": "Malloc disk", 00:04:23.487 "block_size": 512, 00:04:23.487 "num_blocks": 16384, 00:04:23.487 "uuid": "67339b2c-f4ca-4edd-870f-7c21dec9e04b", 00:04:23.487 "assigned_rate_limits": { 00:04:23.487 "rw_ios_per_sec": 0, 00:04:23.487 "rw_mbytes_per_sec": 0, 00:04:23.487 "r_mbytes_per_sec": 0, 00:04:23.487 "w_mbytes_per_sec": 0 00:04:23.487 }, 00:04:23.487 "claimed": false, 00:04:23.487 "zoned": false, 00:04:23.487 "supported_io_types": { 00:04:23.487 "read": true, 00:04:23.487 "write": true, 00:04:23.487 "unmap": true, 00:04:23.487 "write_zeroes": true, 00:04:23.487 "flush": true, 00:04:23.487 "reset": true, 00:04:23.487 "compare": false, 00:04:23.487 "compare_and_write": false, 00:04:23.487 "abort": true, 00:04:23.487 "nvme_admin": false, 00:04:23.487 "nvme_io": false 00:04:23.487 }, 00:04:23.487 "memory_domains": [ 00:04:23.487 { 00:04:23.487 "dma_device_id": "system", 00:04:23.487 "dma_device_type": 1 00:04:23.487 }, 00:04:23.487 { 00:04:23.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.487 "dma_device_type": 2 00:04:23.487 } 00:04:23.487 ], 00:04:23.487 "driver_specific": {} 00:04:23.487 } 00:04:23.487 ]' 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 [2024-05-14 23:46:23.902089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:23.487 [2024-05-14 23:46:23.902121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.487 [2024-05-14 23:46:23.902135] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15eb0d0 00:04:23.487 [2024-05-14 23:46:23.902143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.487 [2024-05-14 23:46:23.903214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.487 [2024-05-14 23:46:23.903237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.487 Passthru0 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.487 { 00:04:23.487 "name": "Malloc0", 00:04:23.487 "aliases": [ 00:04:23.487 "67339b2c-f4ca-4edd-870f-7c21dec9e04b" 00:04:23.487 ], 00:04:23.487 "product_name": "Malloc disk", 00:04:23.487 "block_size": 512, 00:04:23.487 "num_blocks": 16384, 00:04:23.487 "uuid": "67339b2c-f4ca-4edd-870f-7c21dec9e04b", 00:04:23.487 "assigned_rate_limits": { 00:04:23.487 "rw_ios_per_sec": 0, 00:04:23.487 "rw_mbytes_per_sec": 0, 00:04:23.487 "r_mbytes_per_sec": 0, 00:04:23.487 "w_mbytes_per_sec": 0 00:04:23.487 }, 00:04:23.487 "claimed": true, 00:04:23.487 "claim_type": "exclusive_write", 00:04:23.487 "zoned": false, 00:04:23.487 "supported_io_types": { 00:04:23.487 "read": true, 00:04:23.487 "write": true, 00:04:23.487 "unmap": true, 00:04:23.487 "write_zeroes": true, 00:04:23.487 "flush": true, 00:04:23.487 "reset": true, 00:04:23.487 "compare": false, 00:04:23.487 "compare_and_write": false, 00:04:23.487 "abort": true, 00:04:23.487 "nvme_admin": false, 00:04:23.487 "nvme_io": false 00:04:23.487 }, 00:04:23.487 "memory_domains": [ 00:04:23.487 { 00:04:23.487 "dma_device_id": "system", 00:04:23.487 "dma_device_type": 1 00:04:23.487 }, 00:04:23.487 { 00:04:23.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.487 "dma_device_type": 2 00:04:23.487 } 00:04:23.487 ], 00:04:23.487 "driver_specific": {} 00:04:23.487 }, 00:04:23.487 { 00:04:23.487 "name": "Passthru0", 00:04:23.487 "aliases": [ 00:04:23.487 "25ab6d2a-1a00-5154-8cba-cb24ee7cbd3e" 00:04:23.487 ], 00:04:23.487 "product_name": "passthru", 00:04:23.487 "block_size": 512, 00:04:23.487 "num_blocks": 16384, 00:04:23.487 "uuid": "25ab6d2a-1a00-5154-8cba-cb24ee7cbd3e", 00:04:23.487 "assigned_rate_limits": { 00:04:23.487 "rw_ios_per_sec": 0, 00:04:23.487 "rw_mbytes_per_sec": 0, 00:04:23.487 "r_mbytes_per_sec": 0, 00:04:23.487 "w_mbytes_per_sec": 0 00:04:23.487 }, 00:04:23.487 "claimed": false, 00:04:23.487 "zoned": false, 00:04:23.487 "supported_io_types": { 00:04:23.487 "read": true, 00:04:23.487 "write": true, 00:04:23.487 "unmap": true, 00:04:23.487 "write_zeroes": true, 00:04:23.487 "flush": true, 00:04:23.487 "reset": true, 00:04:23.487 "compare": false, 00:04:23.487 "compare_and_write": false, 00:04:23.487 "abort": true, 00:04:23.487 "nvme_admin": false, 00:04:23.487 "nvme_io": false 00:04:23.487 }, 00:04:23.487 "memory_domains": [ 00:04:23.487 { 00:04:23.487 "dma_device_id": "system", 00:04:23.487 "dma_device_type": 1 00:04:23.487 }, 00:04:23.487 { 00:04:23.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.487 "dma_device_type": 2 00:04:23.487 } 00:04:23.487 ], 00:04:23.487 "driver_specific": { 00:04:23.487 "passthru": { 00:04:23.487 "name": "Passthru0", 00:04:23.487 "base_bdev_name": "Malloc0" 00:04:23.487 } 00:04:23.487 } 00:04:23.487 } 00:04:23.487 ]' 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 
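The rpc_integrity subtest above creates a malloc bdev, layers a passthru bdev on top of it, and checks that bdev_get_bdevs now reports both entries (Malloc0 claimed with exclusive_write plus the new Passthru0); the bdev_passthru_delete and bdev_malloc_delete calls around this point then tear everything down and verify the bdev list returns to empty. The same sequence can be reproduced by hand, as a sketch assuming scripts/rpc.py against the default /var/tmp/spdk.sock socket:

  ./scripts/rpc.py bdev_malloc_create 8 512             # 8 MiB malloc bdev, 512-byte blocks (auto-named Malloc0)
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs                       # should list Malloc0 (claimed) and Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs                       # back to an empty list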
23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 23:46:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.487 23:46:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.487 23:46:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.487 00:04:23.487 real 0m0.238s 00:04:23.487 user 0m0.138s 00:04:23.487 sys 0m0.034s 00:04:23.487 23:46:24 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.487 23:46:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 ************************************ 00:04:23.487 END TEST rpc_integrity 00:04:23.487 ************************************ 00:04:23.487 23:46:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:23.487 23:46:24 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.487 23:46:24 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.487 23:46:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.747 ************************************ 00:04:23.747 START TEST rpc_plugins 00:04:23.747 ************************************ 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:23.747 { 00:04:23.747 "name": "Malloc1", 00:04:23.747 "aliases": [ 00:04:23.747 "93a79069-41f9-4221-b767-f7193ca24aa8" 00:04:23.747 ], 00:04:23.747 "product_name": "Malloc disk", 00:04:23.747 "block_size": 4096, 00:04:23.747 "num_blocks": 256, 00:04:23.747 "uuid": "93a79069-41f9-4221-b767-f7193ca24aa8", 00:04:23.747 "assigned_rate_limits": { 00:04:23.747 "rw_ios_per_sec": 0, 00:04:23.747 "rw_mbytes_per_sec": 0, 00:04:23.747 "r_mbytes_per_sec": 0, 00:04:23.747 "w_mbytes_per_sec": 0 00:04:23.747 }, 00:04:23.747 "claimed": false, 00:04:23.747 "zoned": false, 00:04:23.747 "supported_io_types": { 00:04:23.747 "read": true, 00:04:23.747 "write": true, 00:04:23.747 "unmap": true, 00:04:23.747 "write_zeroes": true, 00:04:23.747 
"flush": true, 00:04:23.747 "reset": true, 00:04:23.747 "compare": false, 00:04:23.747 "compare_and_write": false, 00:04:23.747 "abort": true, 00:04:23.747 "nvme_admin": false, 00:04:23.747 "nvme_io": false 00:04:23.747 }, 00:04:23.747 "memory_domains": [ 00:04:23.747 { 00:04:23.747 "dma_device_id": "system", 00:04:23.747 "dma_device_type": 1 00:04:23.747 }, 00:04:23.747 { 00:04:23.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.747 "dma_device_type": 2 00:04:23.747 } 00:04:23.747 ], 00:04:23.747 "driver_specific": {} 00:04:23.747 } 00:04:23.747 ]' 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:23.747 23:46:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:23.747 00:04:23.747 real 0m0.132s 00:04:23.747 user 0m0.079s 00:04:23.747 sys 0m0.018s 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.747 23:46:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.747 ************************************ 00:04:23.747 END TEST rpc_plugins 00:04:23.747 ************************************ 00:04:23.747 23:46:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:23.747 23:46:24 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.747 23:46:24 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.747 23:46:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.747 ************************************ 00:04:23.747 START TEST rpc_trace_cmd_test 00:04:23.747 ************************************ 00:04:23.747 23:46:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:23.747 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:23.747 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:23.747 23:46:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.747 23:46:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.006 23:46:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.006 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:24.006 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3394476", 00:04:24.006 "tpoint_group_mask": "0x8", 00:04:24.006 "iscsi_conn": { 00:04:24.006 "mask": "0x2", 00:04:24.006 "tpoint_mask": "0x0" 00:04:24.006 }, 00:04:24.006 "scsi": { 00:04:24.006 "mask": "0x4", 00:04:24.006 "tpoint_mask": "0x0" 00:04:24.006 }, 00:04:24.006 "bdev": { 00:04:24.006 "mask": "0x8", 00:04:24.006 "tpoint_mask": 
"0xffffffffffffffff" 00:04:24.006 }, 00:04:24.006 "nvmf_rdma": { 00:04:24.006 "mask": "0x10", 00:04:24.006 "tpoint_mask": "0x0" 00:04:24.006 }, 00:04:24.006 "nvmf_tcp": { 00:04:24.006 "mask": "0x20", 00:04:24.006 "tpoint_mask": "0x0" 00:04:24.006 }, 00:04:24.006 "ftl": { 00:04:24.006 "mask": "0x40", 00:04:24.006 "tpoint_mask": "0x0" 00:04:24.006 }, 00:04:24.006 "blobfs": { 00:04:24.006 "mask": "0x80", 00:04:24.006 "tpoint_mask": "0x0" 00:04:24.006 }, 00:04:24.006 "dsa": { 00:04:24.006 "mask": "0x200", 00:04:24.006 "tpoint_mask": "0x0" 00:04:24.006 }, 00:04:24.006 "thread": { 00:04:24.006 "mask": "0x400", 00:04:24.006 "tpoint_mask": "0x0" 00:04:24.006 }, 00:04:24.006 "nvme_pcie": { 00:04:24.007 "mask": "0x800", 00:04:24.007 "tpoint_mask": "0x0" 00:04:24.007 }, 00:04:24.007 "iaa": { 00:04:24.007 "mask": "0x1000", 00:04:24.007 "tpoint_mask": "0x0" 00:04:24.007 }, 00:04:24.007 "nvme_tcp": { 00:04:24.007 "mask": "0x2000", 00:04:24.007 "tpoint_mask": "0x0" 00:04:24.007 }, 00:04:24.007 "bdev_nvme": { 00:04:24.007 "mask": "0x4000", 00:04:24.007 "tpoint_mask": "0x0" 00:04:24.007 }, 00:04:24.007 "sock": { 00:04:24.007 "mask": "0x8000", 00:04:24.007 "tpoint_mask": "0x0" 00:04:24.007 } 00:04:24.007 }' 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:24.007 00:04:24.007 real 0m0.198s 00:04:24.007 user 0m0.155s 00:04:24.007 sys 0m0.037s 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.007 23:46:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.007 ************************************ 00:04:24.007 END TEST rpc_trace_cmd_test 00:04:24.007 ************************************ 00:04:24.007 23:46:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:24.007 23:46:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:24.007 23:46:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:24.007 23:46:24 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.007 23:46:24 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.007 23:46:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.267 ************************************ 00:04:24.267 START TEST rpc_daemon_integrity 00:04:24.267 ************************************ 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.267 { 00:04:24.267 "name": "Malloc2", 00:04:24.267 "aliases": [ 00:04:24.267 "a4e15a6b-9b66-44a3-9f14-f1131b17b958" 00:04:24.267 ], 00:04:24.267 "product_name": "Malloc disk", 00:04:24.267 "block_size": 512, 00:04:24.267 "num_blocks": 16384, 00:04:24.267 "uuid": "a4e15a6b-9b66-44a3-9f14-f1131b17b958", 00:04:24.267 "assigned_rate_limits": { 00:04:24.267 "rw_ios_per_sec": 0, 00:04:24.267 "rw_mbytes_per_sec": 0, 00:04:24.267 "r_mbytes_per_sec": 0, 00:04:24.267 "w_mbytes_per_sec": 0 00:04:24.267 }, 00:04:24.267 "claimed": false, 00:04:24.267 "zoned": false, 00:04:24.267 "supported_io_types": { 00:04:24.267 "read": true, 00:04:24.267 "write": true, 00:04:24.267 "unmap": true, 00:04:24.267 "write_zeroes": true, 00:04:24.267 "flush": true, 00:04:24.267 "reset": true, 00:04:24.267 "compare": false, 00:04:24.267 "compare_and_write": false, 00:04:24.267 "abort": true, 00:04:24.267 "nvme_admin": false, 00:04:24.267 "nvme_io": false 00:04:24.267 }, 00:04:24.267 "memory_domains": [ 00:04:24.267 { 00:04:24.267 "dma_device_id": "system", 00:04:24.267 "dma_device_type": 1 00:04:24.267 }, 00:04:24.267 { 00:04:24.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.267 "dma_device_type": 2 00:04:24.267 } 00:04:24.267 ], 00:04:24.267 "driver_specific": {} 00:04:24.267 } 00:04:24.267 ]' 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.267 [2024-05-14 23:46:24.716274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:24.267 [2024-05-14 23:46:24.716305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.267 [2024-05-14 23:46:24.716320] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15eae80 00:04:24.267 [2024-05-14 23:46:24.716328] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.267 [2024-05-14 23:46:24.717237] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.267 [2024-05-14 23:46:24.717258] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.267 Passthru0 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.267 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.267 { 00:04:24.267 "name": "Malloc2", 00:04:24.267 "aliases": [ 00:04:24.267 "a4e15a6b-9b66-44a3-9f14-f1131b17b958" 00:04:24.267 ], 00:04:24.267 "product_name": "Malloc disk", 00:04:24.267 "block_size": 512, 00:04:24.267 "num_blocks": 16384, 00:04:24.267 "uuid": "a4e15a6b-9b66-44a3-9f14-f1131b17b958", 00:04:24.267 "assigned_rate_limits": { 00:04:24.267 "rw_ios_per_sec": 0, 00:04:24.267 "rw_mbytes_per_sec": 0, 00:04:24.267 "r_mbytes_per_sec": 0, 00:04:24.267 "w_mbytes_per_sec": 0 00:04:24.267 }, 00:04:24.267 "claimed": true, 00:04:24.267 "claim_type": "exclusive_write", 00:04:24.267 "zoned": false, 00:04:24.267 "supported_io_types": { 00:04:24.267 "read": true, 00:04:24.267 "write": true, 00:04:24.267 "unmap": true, 00:04:24.267 "write_zeroes": true, 00:04:24.267 "flush": true, 00:04:24.267 "reset": true, 00:04:24.267 "compare": false, 00:04:24.267 "compare_and_write": false, 00:04:24.267 "abort": true, 00:04:24.267 "nvme_admin": false, 00:04:24.267 "nvme_io": false 00:04:24.267 }, 00:04:24.267 "memory_domains": [ 00:04:24.267 { 00:04:24.267 "dma_device_id": "system", 00:04:24.267 "dma_device_type": 1 00:04:24.267 }, 00:04:24.267 { 00:04:24.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.268 "dma_device_type": 2 00:04:24.268 } 00:04:24.268 ], 00:04:24.268 "driver_specific": {} 00:04:24.268 }, 00:04:24.268 { 00:04:24.268 "name": "Passthru0", 00:04:24.268 "aliases": [ 00:04:24.268 "354d466a-3da9-5632-8438-9a5aa3b8e838" 00:04:24.268 ], 00:04:24.268 "product_name": "passthru", 00:04:24.268 "block_size": 512, 00:04:24.268 "num_blocks": 16384, 00:04:24.268 "uuid": "354d466a-3da9-5632-8438-9a5aa3b8e838", 00:04:24.268 "assigned_rate_limits": { 00:04:24.268 "rw_ios_per_sec": 0, 00:04:24.268 "rw_mbytes_per_sec": 0, 00:04:24.268 "r_mbytes_per_sec": 0, 00:04:24.268 "w_mbytes_per_sec": 0 00:04:24.268 }, 00:04:24.268 "claimed": false, 00:04:24.268 "zoned": false, 00:04:24.268 "supported_io_types": { 00:04:24.268 "read": true, 00:04:24.268 "write": true, 00:04:24.268 "unmap": true, 00:04:24.268 "write_zeroes": true, 00:04:24.268 "flush": true, 00:04:24.268 "reset": true, 00:04:24.268 "compare": false, 00:04:24.268 "compare_and_write": false, 00:04:24.268 "abort": true, 00:04:24.268 "nvme_admin": false, 00:04:24.268 "nvme_io": false 00:04:24.268 }, 00:04:24.268 "memory_domains": [ 00:04:24.268 { 00:04:24.268 "dma_device_id": "system", 00:04:24.268 "dma_device_type": 1 00:04:24.268 }, 00:04:24.268 { 00:04:24.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.268 "dma_device_type": 2 00:04:24.268 } 00:04:24.268 ], 00:04:24.268 "driver_specific": { 00:04:24.268 "passthru": { 00:04:24.268 "name": "Passthru0", 00:04:24.268 "base_bdev_name": "Malloc2" 00:04:24.268 } 00:04:24.268 } 00:04:24.268 } 00:04:24.268 ]' 00:04:24.268 23:46:24 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.268 00:04:24.268 real 0m0.232s 00:04:24.268 user 0m0.125s 00:04:24.268 sys 0m0.046s 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.268 23:46:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.268 ************************************ 00:04:24.268 END TEST rpc_daemon_integrity 00:04:24.268 ************************************ 00:04:24.527 23:46:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:24.527 23:46:24 rpc -- rpc/rpc.sh@84 -- # killprocess 3394476 00:04:24.527 23:46:24 rpc -- common/autotest_common.sh@946 -- # '[' -z 3394476 ']' 00:04:24.527 23:46:24 rpc -- common/autotest_common.sh@950 -- # kill -0 3394476 00:04:24.527 23:46:24 rpc -- common/autotest_common.sh@951 -- # uname 00:04:24.527 23:46:24 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:24.527 23:46:24 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3394476 00:04:24.527 23:46:24 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:24.527 23:46:24 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:24.527 23:46:24 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3394476' 00:04:24.528 killing process with pid 3394476 00:04:24.528 23:46:24 rpc -- common/autotest_common.sh@965 -- # kill 3394476 00:04:24.528 23:46:24 rpc -- common/autotest_common.sh@970 -- # wait 3394476 00:04:24.787 00:04:24.787 real 0m2.443s 00:04:24.787 user 0m3.006s 00:04:24.787 sys 0m0.757s 00:04:24.787 23:46:25 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.787 23:46:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.787 ************************************ 00:04:24.787 END TEST rpc 00:04:24.787 ************************************ 00:04:24.787 23:46:25 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:24.787 23:46:25 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.787 23:46:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.787 23:46:25 -- common/autotest_common.sh@10 -- # set +x 00:04:24.787 ************************************ 00:04:24.787 START TEST skip_rpc 00:04:24.787 ************************************ 00:04:24.787 23:46:25 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:25.046 * Looking for test storage... 00:04:25.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.046 23:46:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:25.046 23:46:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:25.046 23:46:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:25.046 23:46:25 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.046 23:46:25 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.046 23:46:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.046 ************************************ 00:04:25.046 START TEST skip_rpc 00:04:25.046 ************************************ 00:04:25.046 23:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:25.046 23:46:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3395175 00:04:25.046 23:46:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.046 23:46:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:25.046 23:46:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:25.046 [2024-05-14 23:46:25.496721] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:04:25.046 [2024-05-14 23:46:25.496766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3395175 ] 00:04:25.046 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.046 [2024-05-14 23:46:25.564509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.046 [2024-05-14 23:46:25.633806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3395175 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3395175 ']' 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3395175 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3395175 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3395175' 00:04:30.320 killing process with pid 3395175 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3395175 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3395175 00:04:30.320 00:04:30.320 real 0m5.402s 00:04:30.320 user 0m5.147s 00:04:30.320 sys 0m0.295s 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:30.320 23:46:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.320 ************************************ 00:04:30.320 END TEST skip_rpc 
00:04:30.320 ************************************ 00:04:30.320 23:46:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:30.320 23:46:30 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:30.320 23:46:30 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:30.320 23:46:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.580 ************************************ 00:04:30.580 START TEST skip_rpc_with_json 00:04:30.580 ************************************ 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3396023 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3396023 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3396023 ']' 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:30.580 23:46:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.580 [2024-05-14 23:46:30.990553] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
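skip_rpc_with_json exercises configuration save and restore over JSON-RPC: on the first target instance (pid 3396023) a TCP nvmf transport is created via RPC, the full configuration is captured with save_config into test/rpc/config.json (the large JSON document below), and a second instance (pid 3396287) is then started with --json pointing at that file; the harness finally greps the second instance's log for 'TCP Transport Init' to prove the transport was rebuilt from the saved config. In outline, as a sketch assuming scripts/rpc.py and the paths shown in the log:

  # against the first target instance (already listening on /var/tmp/spdk.sock):
  ./scripts/rpc.py nvmf_get_transports --trtype tcp     # expected to fail: transport 'tcp' does not exist yet
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json
  # relaunch from the saved configuration and confirm the transport is recreated
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5                                               # give the target time to load the JSON config, as the harness does
  grep -q 'TCP Transport Init' test/rpc/log.txt && echo "transport restored from config.json"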
00:04:30.580 [2024-05-14 23:46:30.990602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396023 ] 00:04:30.580 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.580 [2024-05-14 23:46:31.059204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.580 [2024-05-14 23:46:31.125978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.521 [2024-05-14 23:46:31.794644] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:31.521 request: 00:04:31.521 { 00:04:31.521 "trtype": "tcp", 00:04:31.521 "method": "nvmf_get_transports", 00:04:31.521 "req_id": 1 00:04:31.521 } 00:04:31.521 Got JSON-RPC error response 00:04:31.521 response: 00:04:31.521 { 00:04:31.521 "code": -19, 00:04:31.521 "message": "No such device" 00:04:31.521 } 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.521 [2024-05-14 23:46:31.806740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:31.521 { 00:04:31.521 "subsystems": [ 00:04:31.521 { 00:04:31.521 "subsystem": "vfio_user_target", 00:04:31.521 "config": null 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "keyring", 00:04:31.521 "config": [] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "iobuf", 00:04:31.521 "config": [ 00:04:31.521 { 00:04:31.521 "method": "iobuf_set_options", 00:04:31.521 "params": { 00:04:31.521 "small_pool_count": 8192, 00:04:31.521 "large_pool_count": 1024, 00:04:31.521 "small_bufsize": 8192, 00:04:31.521 "large_bufsize": 135168 00:04:31.521 } 00:04:31.521 } 00:04:31.521 ] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "sock", 00:04:31.521 "config": [ 00:04:31.521 { 00:04:31.521 "method": "sock_impl_set_options", 00:04:31.521 "params": { 00:04:31.521 "impl_name": "posix", 00:04:31.521 "recv_buf_size": 2097152, 00:04:31.521 "send_buf_size": 2097152, 
00:04:31.521 "enable_recv_pipe": true, 00:04:31.521 "enable_quickack": false, 00:04:31.521 "enable_placement_id": 0, 00:04:31.521 "enable_zerocopy_send_server": true, 00:04:31.521 "enable_zerocopy_send_client": false, 00:04:31.521 "zerocopy_threshold": 0, 00:04:31.521 "tls_version": 0, 00:04:31.521 "enable_ktls": false 00:04:31.521 } 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "method": "sock_impl_set_options", 00:04:31.521 "params": { 00:04:31.521 "impl_name": "ssl", 00:04:31.521 "recv_buf_size": 4096, 00:04:31.521 "send_buf_size": 4096, 00:04:31.521 "enable_recv_pipe": true, 00:04:31.521 "enable_quickack": false, 00:04:31.521 "enable_placement_id": 0, 00:04:31.521 "enable_zerocopy_send_server": true, 00:04:31.521 "enable_zerocopy_send_client": false, 00:04:31.521 "zerocopy_threshold": 0, 00:04:31.521 "tls_version": 0, 00:04:31.521 "enable_ktls": false 00:04:31.521 } 00:04:31.521 } 00:04:31.521 ] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "vmd", 00:04:31.521 "config": [] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "accel", 00:04:31.521 "config": [ 00:04:31.521 { 00:04:31.521 "method": "accel_set_options", 00:04:31.521 "params": { 00:04:31.521 "small_cache_size": 128, 00:04:31.521 "large_cache_size": 16, 00:04:31.521 "task_count": 2048, 00:04:31.521 "sequence_count": 2048, 00:04:31.521 "buf_count": 2048 00:04:31.521 } 00:04:31.521 } 00:04:31.521 ] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "bdev", 00:04:31.521 "config": [ 00:04:31.521 { 00:04:31.521 "method": "bdev_set_options", 00:04:31.521 "params": { 00:04:31.521 "bdev_io_pool_size": 65535, 00:04:31.521 "bdev_io_cache_size": 256, 00:04:31.521 "bdev_auto_examine": true, 00:04:31.521 "iobuf_small_cache_size": 128, 00:04:31.521 "iobuf_large_cache_size": 16 00:04:31.521 } 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "method": "bdev_raid_set_options", 00:04:31.521 "params": { 00:04:31.521 "process_window_size_kb": 1024 00:04:31.521 } 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "method": "bdev_iscsi_set_options", 00:04:31.521 "params": { 00:04:31.521 "timeout_sec": 30 00:04:31.521 } 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "method": "bdev_nvme_set_options", 00:04:31.521 "params": { 00:04:31.521 "action_on_timeout": "none", 00:04:31.521 "timeout_us": 0, 00:04:31.521 "timeout_admin_us": 0, 00:04:31.521 "keep_alive_timeout_ms": 10000, 00:04:31.521 "arbitration_burst": 0, 00:04:31.521 "low_priority_weight": 0, 00:04:31.521 "medium_priority_weight": 0, 00:04:31.521 "high_priority_weight": 0, 00:04:31.521 "nvme_adminq_poll_period_us": 10000, 00:04:31.521 "nvme_ioq_poll_period_us": 0, 00:04:31.521 "io_queue_requests": 0, 00:04:31.521 "delay_cmd_submit": true, 00:04:31.521 "transport_retry_count": 4, 00:04:31.521 "bdev_retry_count": 3, 00:04:31.521 "transport_ack_timeout": 0, 00:04:31.521 "ctrlr_loss_timeout_sec": 0, 00:04:31.521 "reconnect_delay_sec": 0, 00:04:31.521 "fast_io_fail_timeout_sec": 0, 00:04:31.521 "disable_auto_failback": false, 00:04:31.521 "generate_uuids": false, 00:04:31.521 "transport_tos": 0, 00:04:31.521 "nvme_error_stat": false, 00:04:31.521 "rdma_srq_size": 0, 00:04:31.521 "io_path_stat": false, 00:04:31.521 "allow_accel_sequence": false, 00:04:31.521 "rdma_max_cq_size": 0, 00:04:31.521 "rdma_cm_event_timeout_ms": 0, 00:04:31.521 "dhchap_digests": [ 00:04:31.521 "sha256", 00:04:31.521 "sha384", 00:04:31.521 "sha512" 00:04:31.521 ], 00:04:31.521 "dhchap_dhgroups": [ 00:04:31.521 "null", 00:04:31.521 "ffdhe2048", 00:04:31.521 "ffdhe3072", 00:04:31.521 "ffdhe4096", 00:04:31.521 
"ffdhe6144", 00:04:31.521 "ffdhe8192" 00:04:31.521 ] 00:04:31.521 } 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "method": "bdev_nvme_set_hotplug", 00:04:31.521 "params": { 00:04:31.521 "period_us": 100000, 00:04:31.521 "enable": false 00:04:31.521 } 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "method": "bdev_wait_for_examine" 00:04:31.521 } 00:04:31.521 ] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "scsi", 00:04:31.521 "config": null 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "scheduler", 00:04:31.521 "config": [ 00:04:31.521 { 00:04:31.521 "method": "framework_set_scheduler", 00:04:31.521 "params": { 00:04:31.521 "name": "static" 00:04:31.521 } 00:04:31.521 } 00:04:31.521 ] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "vhost_scsi", 00:04:31.521 "config": [] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "vhost_blk", 00:04:31.521 "config": [] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "ublk", 00:04:31.521 "config": [] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "nbd", 00:04:31.521 "config": [] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "nvmf", 00:04:31.521 "config": [ 00:04:31.521 { 00:04:31.521 "method": "nvmf_set_config", 00:04:31.521 "params": { 00:04:31.521 "discovery_filter": "match_any", 00:04:31.521 "admin_cmd_passthru": { 00:04:31.521 "identify_ctrlr": false 00:04:31.521 } 00:04:31.521 } 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "method": "nvmf_set_max_subsystems", 00:04:31.521 "params": { 00:04:31.521 "max_subsystems": 1024 00:04:31.521 } 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "method": "nvmf_set_crdt", 00:04:31.521 "params": { 00:04:31.521 "crdt1": 0, 00:04:31.521 "crdt2": 0, 00:04:31.521 "crdt3": 0 00:04:31.521 } 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "method": "nvmf_create_transport", 00:04:31.521 "params": { 00:04:31.521 "trtype": "TCP", 00:04:31.521 "max_queue_depth": 128, 00:04:31.521 "max_io_qpairs_per_ctrlr": 127, 00:04:31.521 "in_capsule_data_size": 4096, 00:04:31.521 "max_io_size": 131072, 00:04:31.521 "io_unit_size": 131072, 00:04:31.521 "max_aq_depth": 128, 00:04:31.521 "num_shared_buffers": 511, 00:04:31.521 "buf_cache_size": 4294967295, 00:04:31.521 "dif_insert_or_strip": false, 00:04:31.521 "zcopy": false, 00:04:31.521 "c2h_success": true, 00:04:31.521 "sock_priority": 0, 00:04:31.521 "abort_timeout_sec": 1, 00:04:31.521 "ack_timeout": 0, 00:04:31.521 "data_wr_pool_size": 0 00:04:31.521 } 00:04:31.521 } 00:04:31.521 ] 00:04:31.521 }, 00:04:31.521 { 00:04:31.521 "subsystem": "iscsi", 00:04:31.521 "config": [ 00:04:31.521 { 00:04:31.521 "method": "iscsi_set_options", 00:04:31.521 "params": { 00:04:31.521 "node_base": "iqn.2016-06.io.spdk", 00:04:31.521 "max_sessions": 128, 00:04:31.521 "max_connections_per_session": 2, 00:04:31.521 "max_queue_depth": 64, 00:04:31.521 "default_time2wait": 2, 00:04:31.521 "default_time2retain": 20, 00:04:31.521 "first_burst_length": 8192, 00:04:31.521 "immediate_data": true, 00:04:31.521 "allow_duplicated_isid": false, 00:04:31.521 "error_recovery_level": 0, 00:04:31.521 "nop_timeout": 60, 00:04:31.521 "nop_in_interval": 30, 00:04:31.521 "disable_chap": false, 00:04:31.521 "require_chap": false, 00:04:31.521 "mutual_chap": false, 00:04:31.521 "chap_group": 0, 00:04:31.521 "max_large_datain_per_connection": 64, 00:04:31.521 "max_r2t_per_connection": 4, 00:04:31.521 "pdu_pool_size": 36864, 00:04:31.521 "immediate_data_pool_size": 16384, 00:04:31.521 "data_out_pool_size": 2048 00:04:31.521 } 00:04:31.521 } 00:04:31.521 ] 00:04:31.521 } 
00:04:31.521 ] 00:04:31.521 } 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3396023 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3396023 ']' 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3396023 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:31.521 23:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3396023 00:04:31.521 23:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:31.521 23:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:31.521 23:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3396023' 00:04:31.521 killing process with pid 3396023 00:04:31.521 23:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3396023 00:04:31.521 23:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3396023 00:04:31.779 23:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3396287 00:04:31.779 23:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:31.779 23:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3396287 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3396287 ']' 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3396287 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3396287 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3396287' 00:04:37.049 killing process with pid 3396287 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3396287 00:04:37.049 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3396287 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:37.310 00:04:37.310 real 0m6.829s 00:04:37.310 user 0m6.644s 00:04:37.310 sys 0m0.643s 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.310 ************************************ 00:04:37.310 END TEST skip_rpc_with_json 00:04:37.310 ************************************ 00:04:37.310 23:46:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:37.310 23:46:37 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.310 23:46:37 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.310 23:46:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.310 ************************************ 00:04:37.310 START TEST skip_rpc_with_delay 00:04:37.310 ************************************ 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.310 [2024-05-14 23:46:37.877413] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
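That startup error is the expected outcome: skip_rpc_with_delay launches spdk_tgt with both --no-rpc-server and --wait-for-rpc, an invalid combination since there is no RPC server to wait for, and the test only passes if the target refuses to start. A one-line reproduction, assuming the same binary path as above:

  # must fail with "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc && echo "unexpected success"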
00:04:37.310 [2024-05-14 23:46:37.877475] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:37.310 00:04:37.310 real 0m0.049s 00:04:37.310 user 0m0.028s 00:04:37.310 sys 0m0.021s 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:37.310 23:46:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:37.310 ************************************ 00:04:37.310 END TEST skip_rpc_with_delay 00:04:37.310 ************************************ 00:04:37.572 23:46:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:37.572 23:46:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:37.572 23:46:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:37.572 23:46:37 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.572 23:46:37 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.572 23:46:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.572 ************************************ 00:04:37.572 START TEST exit_on_failed_rpc_init 00:04:37.572 ************************************ 00:04:37.572 23:46:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:37.572 23:46:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3397396 00:04:37.572 23:46:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3397396 00:04:37.572 23:46:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.572 23:46:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3397396 ']' 00:04:37.572 23:46:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.572 23:46:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:37.572 23:46:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.572 23:46:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:37.572 23:46:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.572 [2024-05-14 23:46:38.033176] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:04:37.572 [2024-05-14 23:46:38.033225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397396 ] 00:04:37.572 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.572 [2024-05-14 23:46:38.100530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.875 [2024-05-14 23:46:38.179302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:38.499 23:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.499 [2024-05-14 23:46:38.883307] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:04:38.499 [2024-05-14 23:46:38.883357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397489 ] 00:04:38.499 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.499 [2024-05-14 23:46:38.950601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.499 [2024-05-14 23:46:39.019005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.499 [2024-05-14 23:46:39.019085] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:38.500 [2024-05-14 23:46:39.019097] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:38.500 [2024-05-14 23:46:39.019105] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3397396 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3397396 ']' 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3397396 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3397396 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3397396' 00:04:38.758 killing process with pid 3397396 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3397396 00:04:38.758 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3397396 00:04:39.018 00:04:39.018 real 0m1.508s 00:04:39.018 user 0m1.707s 00:04:39.018 sys 0m0.453s 00:04:39.018 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.018 23:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.018 ************************************ 00:04:39.018 END TEST exit_on_failed_rpc_init 00:04:39.018 ************************************ 00:04:39.018 23:46:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:39.018 00:04:39.018 real 0m14.214s 00:04:39.018 user 0m13.675s 00:04:39.018 sys 0m1.704s 00:04:39.018 23:46:39 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.018 23:46:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.018 ************************************ 00:04:39.018 END TEST skip_rpc 00:04:39.018 ************************************ 00:04:39.018 23:46:39 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:39.018 23:46:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.018 23:46:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.018 23:46:39 -- 
common/autotest_common.sh@10 -- # set +x 00:04:39.278 ************************************ 00:04:39.278 START TEST rpc_client 00:04:39.278 ************************************ 00:04:39.278 23:46:39 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:39.278 * Looking for test storage... 00:04:39.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:39.278 23:46:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:39.278 OK 00:04:39.278 23:46:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:39.278 00:04:39.278 real 0m0.137s 00:04:39.278 user 0m0.063s 00:04:39.278 sys 0m0.084s 00:04:39.278 23:46:39 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.278 23:46:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:39.278 ************************************ 00:04:39.278 END TEST rpc_client 00:04:39.278 ************************************ 00:04:39.278 23:46:39 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:39.278 23:46:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.278 23:46:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.278 23:46:39 -- common/autotest_common.sh@10 -- # set +x 00:04:39.278 ************************************ 00:04:39.278 START TEST json_config 00:04:39.278 ************************************ 00:04:39.278 23:46:39 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:39.538 23:46:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.538 23:46:39 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:39.538 23:46:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.538 23:46:39 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.538 23:46:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.538 23:46:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.538 23:46:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.538 23:46:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.538 23:46:39 json_config -- paths/export.sh@5 -- # export PATH 00:04:39.538 23:46:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.539 23:46:39 json_config -- nvmf/common.sh@47 -- # : 0 00:04:39.539 23:46:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:39.539 23:46:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:39.539 23:46:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.539 23:46:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.539 23:46:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.539 23:46:39 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:39.539 23:46:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:39.539 23:46:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:39.539 INFO: JSON configuration test init 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:39.539 23:46:39 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:39.539 23:46:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:39.539 23:46:39 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:39.539 23:46:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.539 23:46:39 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:39.539 23:46:39 json_config -- json_config/common.sh@9 -- # local app=target 00:04:39.539 23:46:39 json_config -- json_config/common.sh@10 -- # shift 00:04:39.539 23:46:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.539 23:46:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.539 23:46:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.539 23:46:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.539 23:46:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.539 23:46:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3397794 00:04:39.539 23:46:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.539 Waiting for target to run... 
00:04:39.539 23:46:39 json_config -- json_config/common.sh@25 -- # waitforlisten 3397794 /var/tmp/spdk_tgt.sock 00:04:39.539 23:46:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:39.539 23:46:39 json_config -- common/autotest_common.sh@827 -- # '[' -z 3397794 ']' 00:04:39.539 23:46:39 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.539 23:46:39 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:39.539 23:46:39 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.539 23:46:39 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:39.539 23:46:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.539 [2024-05-14 23:46:40.024961] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:04:39.539 [2024-05-14 23:46:40.025032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397794 ] 00:04:39.539 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.107 [2024-05-14 23:46:40.458966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.107 [2024-05-14 23:46:40.542924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.368 23:46:40 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:40.369 23:46:40 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:40.369 23:46:40 json_config -- json_config/common.sh@26 -- # echo '' 00:04:40.369 00:04:40.369 23:46:40 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:40.369 23:46:40 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:40.369 23:46:40 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:40.369 23:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.369 23:46:40 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:40.369 23:46:40 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:40.369 23:46:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.369 23:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.369 23:46:40 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:40.369 23:46:40 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:40.369 23:46:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:43.660 23:46:43 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:43.660 23:46:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:43.660 23:46:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:43.660 23:46:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.660 23:46:43 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:04:43.660 23:46:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:43.660 23:46:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:43.660 23:46:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:43.660 23:46:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:43.660 23:46:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:43.660 23:46:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.660 23:46:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:43.660 23:46:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:43.660 23:46:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:43.660 23:46:44 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:43.660 23:46:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:43.919 MallocForNvmf0 00:04:43.919 23:46:44 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:43.919 23:46:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:43.919 MallocForNvmf1 00:04:43.919 23:46:44 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:43.919 23:46:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:44.178 [2024-05-14 23:46:44.642710] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.178 23:46:44 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:44.178 23:46:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:44.437 23:46:44 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:44.437 23:46:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:44.437 23:46:44 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:44.437 23:46:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:44.696 23:46:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:44.696 23:46:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:44.955 [2024-05-14 23:46:45.320505] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:44.955 [2024-05-14 23:46:45.320914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.955 23:46:45 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:44.955 23:46:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.955 23:46:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.955 23:46:45 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:44.955 23:46:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.955 23:46:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.955 23:46:45 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:44.955 23:46:45 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.955 23:46:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:45.215 MallocBdevForConfigChangeCheck 00:04:45.215 23:46:45 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:45.215 23:46:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.215 23:46:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 23:46:45 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:45.215 23:46:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.475 23:46:45 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:04:45.475 INFO: shutting down applications... 00:04:45.475 23:46:45 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:45.475 23:46:45 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:45.475 23:46:45 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:45.475 23:46:45 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:48.013 Calling clear_iscsi_subsystem 00:04:48.013 Calling clear_nvmf_subsystem 00:04:48.013 Calling clear_nbd_subsystem 00:04:48.013 Calling clear_ublk_subsystem 00:04:48.013 Calling clear_vhost_blk_subsystem 00:04:48.013 Calling clear_vhost_scsi_subsystem 00:04:48.013 Calling clear_bdev_subsystem 00:04:48.013 23:46:47 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:48.013 23:46:47 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:48.013 23:46:47 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:48.013 23:46:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.013 23:46:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:48.013 23:46:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:48.013 23:46:48 json_config -- json_config/json_config.sh@345 -- # break 00:04:48.013 23:46:48 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:48.013 23:46:48 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:48.013 23:46:48 json_config -- json_config/common.sh@31 -- # local app=target 00:04:48.013 23:46:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:48.013 23:46:48 json_config -- json_config/common.sh@35 -- # [[ -n 3397794 ]] 00:04:48.013 23:46:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3397794 00:04:48.013 [2024-05-14 23:46:48.311767] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:48.013 23:46:48 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:48.013 23:46:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.013 23:46:48 json_config -- json_config/common.sh@41 -- # kill -0 3397794 00:04:48.013 23:46:48 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.274 23:46:48 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.274 23:46:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.274 23:46:48 json_config -- json_config/common.sh@41 -- # kill -0 3397794 00:04:48.274 23:46:48 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.274 23:46:48 json_config -- json_config/common.sh@43 -- # break 00:04:48.274 23:46:48 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.274 23:46:48 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.274 SPDK target shutdown done 00:04:48.274 23:46:48 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:48.274 INFO: relaunching applications... 00:04:48.274 23:46:48 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.274 23:46:48 json_config -- json_config/common.sh@9 -- # local app=target 00:04:48.274 23:46:48 json_config -- json_config/common.sh@10 -- # shift 00:04:48.274 23:46:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.274 23:46:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.274 23:46:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.274 23:46:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.274 23:46:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.274 23:46:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3399516 00:04:48.274 23:46:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.274 Waiting for target to run... 00:04:48.274 23:46:48 json_config -- json_config/common.sh@25 -- # waitforlisten 3399516 /var/tmp/spdk_tgt.sock 00:04:48.274 23:46:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.274 23:46:48 json_config -- common/autotest_common.sh@827 -- # '[' -z 3399516 ']' 00:04:48.274 23:46:48 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.274 23:46:48 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:48.274 23:46:48 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.274 23:46:48 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:48.274 23:46:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.534 [2024-05-14 23:46:48.875132] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:04:48.534 [2024-05-14 23:46:48.875210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399516 ] 00:04:48.534 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.793 [2024-05-14 23:46:49.317602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.052 [2024-05-14 23:46:49.403748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.345 [2024-05-14 23:46:52.424668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.345 [2024-05-14 23:46:52.456676] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:52.345 [2024-05-14 23:46:52.457047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:52.604 23:46:53 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:52.604 23:46:53 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:52.604 23:46:53 json_config -- json_config/common.sh@26 -- # echo '' 00:04:52.604 00:04:52.604 23:46:53 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:52.604 23:46:53 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:52.604 INFO: Checking if target configuration is the same... 00:04:52.604 23:46:53 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:52.604 23:46:53 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.604 23:46:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.604 + '[' 2 -ne 2 ']' 00:04:52.604 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:52.604 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:52.604 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:52.604 +++ basename /dev/fd/62 00:04:52.604 ++ mktemp /tmp/62.XXX 00:04:52.604 + tmp_file_1=/tmp/62.bBT 00:04:52.604 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.604 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:52.604 + tmp_file_2=/tmp/spdk_tgt_config.json.9XS 00:04:52.604 + ret=0 00:04:52.604 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:52.864 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:52.864 + diff -u /tmp/62.bBT /tmp/spdk_tgt_config.json.9XS 00:04:52.864 + echo 'INFO: JSON config files are the same' 00:04:52.864 INFO: JSON config files are the same 00:04:52.864 + rm /tmp/62.bBT /tmp/spdk_tgt_config.json.9XS 00:04:52.864 + exit 0 00:04:52.864 23:46:53 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:52.864 23:46:53 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:52.864 INFO: changing configuration and checking if this can be detected... 
00:04:52.864 23:46:53 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:52.864 23:46:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:53.123 23:46:53 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.123 23:46:53 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:53.123 23:46:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:53.123 + '[' 2 -ne 2 ']' 00:04:53.123 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:53.123 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:53.123 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:53.123 +++ basename /dev/fd/62 00:04:53.123 ++ mktemp /tmp/62.XXX 00:04:53.123 + tmp_file_1=/tmp/62.hax 00:04:53.123 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.123 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:53.123 + tmp_file_2=/tmp/spdk_tgt_config.json.Ei2 00:04:53.123 + ret=0 00:04:53.123 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:53.381 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:53.381 + diff -u /tmp/62.hax /tmp/spdk_tgt_config.json.Ei2 00:04:53.381 + ret=1 00:04:53.381 + echo '=== Start of file: /tmp/62.hax ===' 00:04:53.381 + cat /tmp/62.hax 00:04:53.381 + echo '=== End of file: /tmp/62.hax ===' 00:04:53.381 + echo '' 00:04:53.381 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Ei2 ===' 00:04:53.381 + cat /tmp/spdk_tgt_config.json.Ei2 00:04:53.381 + echo '=== End of file: /tmp/spdk_tgt_config.json.Ei2 ===' 00:04:53.381 + echo '' 00:04:53.381 + rm /tmp/62.hax /tmp/spdk_tgt_config.json.Ei2 00:04:53.381 + exit 1 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:53.381 INFO: configuration change detected. 
00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:53.381 23:46:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:53.381 23:46:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@317 -- # [[ -n 3399516 ]] 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:53.381 23:46:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:53.381 23:46:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:53.381 23:46:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.381 23:46:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.381 23:46:53 json_config -- json_config/json_config.sh@323 -- # killprocess 3399516 00:04:53.381 23:46:53 json_config -- common/autotest_common.sh@946 -- # '[' -z 3399516 ']' 00:04:53.381 23:46:53 json_config -- common/autotest_common.sh@950 -- # kill -0 3399516 00:04:53.381 23:46:53 json_config -- common/autotest_common.sh@951 -- # uname 00:04:53.641 23:46:53 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:53.641 23:46:53 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3399516 00:04:53.641 23:46:54 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:53.641 23:46:54 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:53.641 23:46:54 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3399516' 00:04:53.641 killing process with pid 3399516 00:04:53.641 23:46:54 json_config -- common/autotest_common.sh@965 -- # kill 3399516 00:04:53.641 [2024-05-14 23:46:54.026538] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:53.641 23:46:54 json_config -- common/autotest_common.sh@970 -- # wait 3399516 00:04:55.585 23:46:56 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.585 23:46:56 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:55.585 23:46:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.585 23:46:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.585 23:46:56 
json_config -- json_config/json_config.sh@328 -- # return 0 00:04:55.585 23:46:56 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:55.585 INFO: Success 00:04:55.585 00:04:55.585 real 0m16.252s 00:04:55.585 user 0m16.612s 00:04:55.585 sys 0m2.329s 00:04:55.585 23:46:56 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:55.585 23:46:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.585 ************************************ 00:04:55.585 END TEST json_config 00:04:55.585 ************************************ 00:04:55.585 23:46:56 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:55.585 23:46:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:55.585 23:46:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.585 23:46:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.845 ************************************ 00:04:55.845 START TEST json_config_extra_key 00:04:55.845 ************************************ 00:04:55.845 23:46:56 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:55.845 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.845 23:46:56 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.845 23:46:56 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.845 
23:46:56 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.845 23:46:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.845 23:46:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.845 23:46:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.845 23:46:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:55.845 23:46:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:55.845 23:46:56 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:55.845 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:55.845 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:55.845 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:55.845 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:55.845 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:55.845 23:46:56 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:55.845 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:55.846 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:55.846 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:55.846 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.846 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:55.846 INFO: launching applications... 00:04:55.846 23:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3400958 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.846 Waiting for target to run... 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3400958 /var/tmp/spdk_tgt.sock 00:04:55.846 23:46:56 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3400958 ']' 00:04:55.846 23:46:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:55.846 23:46:56 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.846 23:46:56 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:55.846 23:46:56 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.846 23:46:56 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:55.846 23:46:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.846 [2024-05-14 23:46:56.340253] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:04:55.846 [2024-05-14 23:46:56.340300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400958 ] 00:04:55.846 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.415 [2024-05-14 23:46:56.776234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.415 [2024-05-14 23:46:56.862315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.674 23:46:57 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:56.674 23:46:57 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:56.674 23:46:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:56.674 00:04:56.674 23:46:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:56.674 INFO: shutting down applications... 00:04:56.674 23:46:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:56.674 23:46:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:56.674 23:46:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.674 23:46:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3400958 ]] 00:04:56.674 23:46:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3400958 00:04:56.674 23:46:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.674 23:46:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.674 23:46:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3400958 00:04:56.674 23:46:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.242 23:46:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.242 23:46:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.242 23:46:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3400958 00:04:57.242 23:46:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:57.242 23:46:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:57.242 23:46:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:57.242 23:46:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:57.242 SPDK target shutdown done 00:04:57.242 23:46:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:57.242 Success 00:04:57.242 00:04:57.242 real 0m1.452s 00:04:57.242 user 0m1.057s 00:04:57.242 sys 0m0.569s 00:04:57.242 23:46:57 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.242 23:46:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:57.242 ************************************ 00:04:57.242 END TEST json_config_extra_key 00:04:57.242 ************************************ 00:04:57.242 23:46:57 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.242 23:46:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.242 23:46:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.242 23:46:57 -- common/autotest_common.sh@10 -- # set +x 00:04:57.242 ************************************ 
00:04:57.242 START TEST alias_rpc 00:04:57.242 ************************************ 00:04:57.242 23:46:57 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.242 * Looking for test storage... 00:04:57.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:57.242 23:46:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.242 23:46:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3401279 00:04:57.242 23:46:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.242 23:46:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3401279 00:04:57.242 23:46:57 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3401279 ']' 00:04:57.242 23:46:57 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.242 23:46:57 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:57.242 23:46:57 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.242 23:46:57 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:57.242 23:46:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.502 [2024-05-14 23:46:57.867700] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:04:57.502 [2024-05-14 23:46:57.867746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401279 ] 00:04:57.502 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.502 [2024-05-14 23:46:57.935442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.502 [2024-05-14 23:46:58.004641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.070 23:46:58 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:58.070 23:46:58 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:58.070 23:46:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:58.329 23:46:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3401279 00:04:58.330 23:46:58 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3401279 ']' 00:04:58.330 23:46:58 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3401279 00:04:58.330 23:46:58 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:58.330 23:46:58 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:58.330 23:46:58 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3401279 00:04:58.330 23:46:58 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:58.330 23:46:58 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:58.330 23:46:58 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3401279' 00:04:58.330 killing process with pid 3401279 00:04:58.330 23:46:58 alias_rpc -- common/autotest_common.sh@965 -- # kill 3401279 00:04:58.330 23:46:58 alias_rpc -- common/autotest_common.sh@970 -- # wait 3401279 
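The alias_rpc flow above is, at its core, three shell steps; a rough sketch assuming the same repo layout, with config.json standing in for whatever configuration the test feeds it (the exact input is not visible in this log) and the until-loop as a crude stand-in for the waitforlisten helper:

  ./build/bin/spdk_tgt &                                   # start the target
  tgt_pid=$!
  until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py load_config -i < config.json            # replay a JSON config; -i presumably pulls in the method aliases this test checks
  kill "$tgt_pid" && wait "$tgt_pid"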
00:04:58.937 00:04:58.937 real 0m1.533s 00:04:58.937 user 0m1.619s 00:04:58.937 sys 0m0.454s 00:04:58.937 23:46:59 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.937 23:46:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.937 ************************************ 00:04:58.937 END TEST alias_rpc 00:04:58.937 ************************************ 00:04:58.937 23:46:59 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:58.937 23:46:59 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:58.937 23:46:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.937 23:46:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.937 23:46:59 -- common/autotest_common.sh@10 -- # set +x 00:04:58.937 ************************************ 00:04:58.937 START TEST spdkcli_tcp 00:04:58.937 ************************************ 00:04:58.937 23:46:59 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:58.937 * Looking for test storage... 00:04:58.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:58.937 23:46:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:58.937 23:46:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:58.937 23:46:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:58.937 23:46:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.937 23:46:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.937 23:46:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.937 23:46:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.937 23:46:59 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:58.937 23:46:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.937 23:46:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3401599 00:04:58.937 23:46:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3401599 00:04:58.937 23:46:59 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3401599 ']' 00:04:58.937 23:46:59 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.937 23:46:59 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:58.937 23:46:59 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.937 23:46:59 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:58.937 23:46:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.937 23:46:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.937 [2024-05-14 23:46:59.485013] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:04:58.937 [2024-05-14 23:46:59.485069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401599 ] 00:04:58.937 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.197 [2024-05-14 23:46:59.554789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.197 [2024-05-14 23:46:59.628471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.197 [2024-05-14 23:46:59.628474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.764 23:47:00 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:59.764 23:47:00 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:59.764 23:47:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3401660 00:04:59.764 23:47:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:59.764 23:47:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:00.023 [ 00:05:00.023 "bdev_malloc_delete", 00:05:00.023 "bdev_malloc_create", 00:05:00.023 "bdev_null_resize", 00:05:00.023 "bdev_null_delete", 00:05:00.023 "bdev_null_create", 00:05:00.023 "bdev_nvme_cuse_unregister", 00:05:00.023 "bdev_nvme_cuse_register", 00:05:00.023 "bdev_opal_new_user", 00:05:00.023 "bdev_opal_set_lock_state", 00:05:00.023 "bdev_opal_delete", 00:05:00.023 "bdev_opal_get_info", 00:05:00.023 "bdev_opal_create", 00:05:00.023 "bdev_nvme_opal_revert", 00:05:00.023 "bdev_nvme_opal_init", 00:05:00.023 "bdev_nvme_send_cmd", 00:05:00.023 "bdev_nvme_get_path_iostat", 00:05:00.023 "bdev_nvme_get_mdns_discovery_info", 00:05:00.023 "bdev_nvme_stop_mdns_discovery", 00:05:00.023 "bdev_nvme_start_mdns_discovery", 00:05:00.023 "bdev_nvme_set_multipath_policy", 00:05:00.023 "bdev_nvme_set_preferred_path", 00:05:00.023 "bdev_nvme_get_io_paths", 00:05:00.023 "bdev_nvme_remove_error_injection", 00:05:00.023 "bdev_nvme_add_error_injection", 00:05:00.023 "bdev_nvme_get_discovery_info", 00:05:00.023 "bdev_nvme_stop_discovery", 00:05:00.023 "bdev_nvme_start_discovery", 00:05:00.023 "bdev_nvme_get_controller_health_info", 00:05:00.023 "bdev_nvme_disable_controller", 00:05:00.023 "bdev_nvme_enable_controller", 00:05:00.023 "bdev_nvme_reset_controller", 00:05:00.024 "bdev_nvme_get_transport_statistics", 00:05:00.024 "bdev_nvme_apply_firmware", 00:05:00.024 "bdev_nvme_detach_controller", 00:05:00.024 "bdev_nvme_get_controllers", 00:05:00.024 "bdev_nvme_attach_controller", 00:05:00.024 "bdev_nvme_set_hotplug", 00:05:00.024 "bdev_nvme_set_options", 00:05:00.024 "bdev_passthru_delete", 00:05:00.024 "bdev_passthru_create", 00:05:00.024 "bdev_lvol_check_shallow_copy", 00:05:00.024 "bdev_lvol_start_shallow_copy", 00:05:00.024 "bdev_lvol_grow_lvstore", 00:05:00.024 "bdev_lvol_get_lvols", 00:05:00.024 "bdev_lvol_get_lvstores", 00:05:00.024 "bdev_lvol_delete", 00:05:00.024 "bdev_lvol_set_read_only", 00:05:00.024 "bdev_lvol_resize", 00:05:00.024 "bdev_lvol_decouple_parent", 00:05:00.024 "bdev_lvol_inflate", 00:05:00.024 "bdev_lvol_rename", 00:05:00.024 "bdev_lvol_clone_bdev", 00:05:00.024 "bdev_lvol_clone", 00:05:00.024 "bdev_lvol_snapshot", 00:05:00.024 "bdev_lvol_create", 00:05:00.024 "bdev_lvol_delete_lvstore", 00:05:00.024 "bdev_lvol_rename_lvstore", 00:05:00.024 "bdev_lvol_create_lvstore", 00:05:00.024 "bdev_raid_set_options", 
00:05:00.024 "bdev_raid_remove_base_bdev", 00:05:00.024 "bdev_raid_add_base_bdev", 00:05:00.024 "bdev_raid_delete", 00:05:00.024 "bdev_raid_create", 00:05:00.024 "bdev_raid_get_bdevs", 00:05:00.024 "bdev_error_inject_error", 00:05:00.024 "bdev_error_delete", 00:05:00.024 "bdev_error_create", 00:05:00.024 "bdev_split_delete", 00:05:00.024 "bdev_split_create", 00:05:00.024 "bdev_delay_delete", 00:05:00.024 "bdev_delay_create", 00:05:00.024 "bdev_delay_update_latency", 00:05:00.024 "bdev_zone_block_delete", 00:05:00.024 "bdev_zone_block_create", 00:05:00.024 "blobfs_create", 00:05:00.024 "blobfs_detect", 00:05:00.024 "blobfs_set_cache_size", 00:05:00.024 "bdev_aio_delete", 00:05:00.024 "bdev_aio_rescan", 00:05:00.024 "bdev_aio_create", 00:05:00.024 "bdev_ftl_set_property", 00:05:00.024 "bdev_ftl_get_properties", 00:05:00.024 "bdev_ftl_get_stats", 00:05:00.024 "bdev_ftl_unmap", 00:05:00.024 "bdev_ftl_unload", 00:05:00.024 "bdev_ftl_delete", 00:05:00.024 "bdev_ftl_load", 00:05:00.024 "bdev_ftl_create", 00:05:00.024 "bdev_virtio_attach_controller", 00:05:00.024 "bdev_virtio_scsi_get_devices", 00:05:00.024 "bdev_virtio_detach_controller", 00:05:00.024 "bdev_virtio_blk_set_hotplug", 00:05:00.024 "bdev_iscsi_delete", 00:05:00.024 "bdev_iscsi_create", 00:05:00.024 "bdev_iscsi_set_options", 00:05:00.024 "accel_error_inject_error", 00:05:00.024 "ioat_scan_accel_module", 00:05:00.024 "dsa_scan_accel_module", 00:05:00.024 "iaa_scan_accel_module", 00:05:00.024 "vfu_virtio_create_scsi_endpoint", 00:05:00.024 "vfu_virtio_scsi_remove_target", 00:05:00.024 "vfu_virtio_scsi_add_target", 00:05:00.024 "vfu_virtio_create_blk_endpoint", 00:05:00.024 "vfu_virtio_delete_endpoint", 00:05:00.024 "keyring_file_remove_key", 00:05:00.024 "keyring_file_add_key", 00:05:00.024 "iscsi_get_histogram", 00:05:00.024 "iscsi_enable_histogram", 00:05:00.024 "iscsi_set_options", 00:05:00.024 "iscsi_get_auth_groups", 00:05:00.024 "iscsi_auth_group_remove_secret", 00:05:00.024 "iscsi_auth_group_add_secret", 00:05:00.024 "iscsi_delete_auth_group", 00:05:00.024 "iscsi_create_auth_group", 00:05:00.024 "iscsi_set_discovery_auth", 00:05:00.024 "iscsi_get_options", 00:05:00.024 "iscsi_target_node_request_logout", 00:05:00.024 "iscsi_target_node_set_redirect", 00:05:00.024 "iscsi_target_node_set_auth", 00:05:00.024 "iscsi_target_node_add_lun", 00:05:00.024 "iscsi_get_stats", 00:05:00.024 "iscsi_get_connections", 00:05:00.024 "iscsi_portal_group_set_auth", 00:05:00.024 "iscsi_start_portal_group", 00:05:00.024 "iscsi_delete_portal_group", 00:05:00.024 "iscsi_create_portal_group", 00:05:00.024 "iscsi_get_portal_groups", 00:05:00.024 "iscsi_delete_target_node", 00:05:00.024 "iscsi_target_node_remove_pg_ig_maps", 00:05:00.024 "iscsi_target_node_add_pg_ig_maps", 00:05:00.024 "iscsi_create_target_node", 00:05:00.024 "iscsi_get_target_nodes", 00:05:00.024 "iscsi_delete_initiator_group", 00:05:00.024 "iscsi_initiator_group_remove_initiators", 00:05:00.024 "iscsi_initiator_group_add_initiators", 00:05:00.024 "iscsi_create_initiator_group", 00:05:00.024 "iscsi_get_initiator_groups", 00:05:00.024 "nvmf_set_crdt", 00:05:00.024 "nvmf_set_config", 00:05:00.024 "nvmf_set_max_subsystems", 00:05:00.024 "nvmf_subsystem_get_listeners", 00:05:00.024 "nvmf_subsystem_get_qpairs", 00:05:00.024 "nvmf_subsystem_get_controllers", 00:05:00.024 "nvmf_get_stats", 00:05:00.024 "nvmf_get_transports", 00:05:00.024 "nvmf_create_transport", 00:05:00.024 "nvmf_get_targets", 00:05:00.024 "nvmf_delete_target", 00:05:00.024 "nvmf_create_target", 00:05:00.024 
"nvmf_subsystem_allow_any_host", 00:05:00.024 "nvmf_subsystem_remove_host", 00:05:00.024 "nvmf_subsystem_add_host", 00:05:00.024 "nvmf_ns_remove_host", 00:05:00.024 "nvmf_ns_add_host", 00:05:00.024 "nvmf_subsystem_remove_ns", 00:05:00.024 "nvmf_subsystem_add_ns", 00:05:00.024 "nvmf_subsystem_listener_set_ana_state", 00:05:00.024 "nvmf_discovery_get_referrals", 00:05:00.024 "nvmf_discovery_remove_referral", 00:05:00.024 "nvmf_discovery_add_referral", 00:05:00.024 "nvmf_subsystem_remove_listener", 00:05:00.024 "nvmf_subsystem_add_listener", 00:05:00.024 "nvmf_delete_subsystem", 00:05:00.024 "nvmf_create_subsystem", 00:05:00.024 "nvmf_get_subsystems", 00:05:00.024 "env_dpdk_get_mem_stats", 00:05:00.024 "nbd_get_disks", 00:05:00.024 "nbd_stop_disk", 00:05:00.024 "nbd_start_disk", 00:05:00.024 "ublk_recover_disk", 00:05:00.024 "ublk_get_disks", 00:05:00.024 "ublk_stop_disk", 00:05:00.024 "ublk_start_disk", 00:05:00.024 "ublk_destroy_target", 00:05:00.024 "ublk_create_target", 00:05:00.024 "virtio_blk_create_transport", 00:05:00.024 "virtio_blk_get_transports", 00:05:00.024 "vhost_controller_set_coalescing", 00:05:00.024 "vhost_get_controllers", 00:05:00.024 "vhost_delete_controller", 00:05:00.024 "vhost_create_blk_controller", 00:05:00.024 "vhost_scsi_controller_remove_target", 00:05:00.024 "vhost_scsi_controller_add_target", 00:05:00.024 "vhost_start_scsi_controller", 00:05:00.024 "vhost_create_scsi_controller", 00:05:00.024 "thread_set_cpumask", 00:05:00.024 "framework_get_scheduler", 00:05:00.024 "framework_set_scheduler", 00:05:00.024 "framework_get_reactors", 00:05:00.024 "thread_get_io_channels", 00:05:00.024 "thread_get_pollers", 00:05:00.024 "thread_get_stats", 00:05:00.024 "framework_monitor_context_switch", 00:05:00.024 "spdk_kill_instance", 00:05:00.024 "log_enable_timestamps", 00:05:00.024 "log_get_flags", 00:05:00.024 "log_clear_flag", 00:05:00.024 "log_set_flag", 00:05:00.024 "log_get_level", 00:05:00.024 "log_set_level", 00:05:00.024 "log_get_print_level", 00:05:00.024 "log_set_print_level", 00:05:00.024 "framework_enable_cpumask_locks", 00:05:00.024 "framework_disable_cpumask_locks", 00:05:00.024 "framework_wait_init", 00:05:00.024 "framework_start_init", 00:05:00.024 "scsi_get_devices", 00:05:00.024 "bdev_get_histogram", 00:05:00.024 "bdev_enable_histogram", 00:05:00.024 "bdev_set_qos_limit", 00:05:00.024 "bdev_set_qd_sampling_period", 00:05:00.024 "bdev_get_bdevs", 00:05:00.024 "bdev_reset_iostat", 00:05:00.024 "bdev_get_iostat", 00:05:00.024 "bdev_examine", 00:05:00.024 "bdev_wait_for_examine", 00:05:00.024 "bdev_set_options", 00:05:00.024 "notify_get_notifications", 00:05:00.024 "notify_get_types", 00:05:00.024 "accel_get_stats", 00:05:00.024 "accel_set_options", 00:05:00.024 "accel_set_driver", 00:05:00.024 "accel_crypto_key_destroy", 00:05:00.024 "accel_crypto_keys_get", 00:05:00.024 "accel_crypto_key_create", 00:05:00.024 "accel_assign_opc", 00:05:00.024 "accel_get_module_info", 00:05:00.024 "accel_get_opc_assignments", 00:05:00.024 "vmd_rescan", 00:05:00.024 "vmd_remove_device", 00:05:00.024 "vmd_enable", 00:05:00.024 "sock_get_default_impl", 00:05:00.024 "sock_set_default_impl", 00:05:00.024 "sock_impl_set_options", 00:05:00.024 "sock_impl_get_options", 00:05:00.024 "iobuf_get_stats", 00:05:00.024 "iobuf_set_options", 00:05:00.024 "keyring_get_keys", 00:05:00.024 "framework_get_pci_devices", 00:05:00.024 "framework_get_config", 00:05:00.024 "framework_get_subsystems", 00:05:00.024 "vfu_tgt_set_base_path", 00:05:00.024 "trace_get_info", 00:05:00.024 
"trace_get_tpoint_group_mask", 00:05:00.024 "trace_disable_tpoint_group", 00:05:00.024 "trace_enable_tpoint_group", 00:05:00.024 "trace_clear_tpoint_mask", 00:05:00.024 "trace_set_tpoint_mask", 00:05:00.024 "spdk_get_version", 00:05:00.024 "rpc_get_methods" 00:05:00.024 ] 00:05:00.024 23:47:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.024 23:47:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:00.024 23:47:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3401599 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3401599 ']' 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3401599 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3401599 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3401599' 00:05:00.024 killing process with pid 3401599 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3401599 00:05:00.024 23:47:00 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3401599 00:05:00.593 00:05:00.593 real 0m1.571s 00:05:00.593 user 0m2.847s 00:05:00.593 sys 0m0.515s 00:05:00.593 23:47:00 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.593 23:47:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.593 ************************************ 00:05:00.593 END TEST spdkcli_tcp 00:05:00.593 ************************************ 00:05:00.593 23:47:00 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.593 23:47:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.593 23:47:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.593 23:47:00 -- common/autotest_common.sh@10 -- # set +x 00:05:00.593 ************************************ 00:05:00.593 START TEST dpdk_mem_utility 00:05:00.593 ************************************ 00:05:00.593 23:47:00 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.593 * Looking for test storage... 
00:05:00.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:00.593 23:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:00.593 23:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3401938 00:05:00.593 23:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3401938 00:05:00.593 23:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.593 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3401938 ']' 00:05:00.593 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.593 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:00.593 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.593 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:00.593 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.593 [2024-05-14 23:47:01.139201] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:00.593 [2024-05-14 23:47:01.139245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401938 ] 00:05:00.593 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.852 [2024-05-14 23:47:01.207121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.852 [2024-05-14 23:47:01.277567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.421 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:01.421 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:01.421 23:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:01.421 23:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:01.421 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.421 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.421 { 00:05:01.421 "filename": "/tmp/spdk_mem_dump.txt" 00:05:01.421 } 00:05:01.421 23:47:01 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.421 23:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:01.421 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:01.421 1 heaps totaling size 814.000000 MiB 00:05:01.421 size: 814.000000 MiB heap id: 0 00:05:01.421 end heaps---------- 00:05:01.421 8 mempools totaling size 598.116089 MiB 00:05:01.421 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:01.421 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:01.421 size: 84.521057 MiB name: bdev_io_3401938 00:05:01.421 size: 51.011292 MiB name: evtpool_3401938 00:05:01.421 size: 50.003479 MiB name: 
msgpool_3401938 00:05:01.421 size: 21.763794 MiB name: PDU_Pool 00:05:01.421 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:01.421 size: 0.026123 MiB name: Session_Pool 00:05:01.421 end mempools------- 00:05:01.421 6 memzones totaling size 4.142822 MiB 00:05:01.421 size: 1.000366 MiB name: RG_ring_0_3401938 00:05:01.421 size: 1.000366 MiB name: RG_ring_1_3401938 00:05:01.421 size: 1.000366 MiB name: RG_ring_4_3401938 00:05:01.421 size: 1.000366 MiB name: RG_ring_5_3401938 00:05:01.421 size: 0.125366 MiB name: RG_ring_2_3401938 00:05:01.421 size: 0.015991 MiB name: RG_ring_3_3401938 00:05:01.421 end memzones------- 00:05:01.421 23:47:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:01.680 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:01.680 list of free elements. size: 12.519348 MiB 00:05:01.680 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:01.680 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:01.680 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:01.680 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:01.680 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:01.680 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:01.680 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:01.680 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:01.680 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:01.680 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:01.680 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:01.680 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:01.680 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:01.680 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:01.680 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:01.680 list of standard malloc elements. 
size: 199.218079 MiB 00:05:01.680 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:01.680 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:01.680 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:01.680 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:01.680 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:01.680 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:01.680 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:01.680 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:01.680 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:01.680 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:01.681 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:01.681 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:01.681 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:01.681 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:01.681 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:01.681 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:01.681 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:01.681 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:01.681 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:01.681 list of memzone associated elements. 
size: 602.262573 MiB 00:05:01.681 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:01.681 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:01.681 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:01.681 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:01.681 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:01.681 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3401938_0 00:05:01.681 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:01.681 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3401938_0 00:05:01.681 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:01.681 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3401938_0 00:05:01.681 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:01.681 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:01.681 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:01.681 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:01.681 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:01.681 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3401938 00:05:01.681 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:01.681 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3401938 00:05:01.681 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:01.681 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3401938 00:05:01.681 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:01.681 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:01.681 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:01.681 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:01.681 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:01.681 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:01.681 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:01.681 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:01.681 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:01.681 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3401938 00:05:01.681 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:01.681 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3401938 00:05:01.681 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:01.681 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3401938 00:05:01.681 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:01.681 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3401938 00:05:01.681 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:01.681 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3401938 00:05:01.681 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:01.681 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:01.681 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:01.681 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:01.681 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:01.681 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:01.681 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:01.681 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3401938 00:05:01.681 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:01.681 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:01.681 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:01.681 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:01.681 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:01.681 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3401938 00:05:01.681 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:01.681 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:01.681 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:01.681 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3401938 00:05:01.681 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:01.681 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3401938 00:05:01.681 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:01.681 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:01.681 23:47:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:01.681 23:47:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3401938 00:05:01.681 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3401938 ']' 00:05:01.681 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3401938 00:05:01.681 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:01.681 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:01.681 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3401938 00:05:01.681 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:01.681 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:01.681 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3401938' 00:05:01.681 killing process with pid 3401938 00:05:01.681 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3401938 00:05:01.681 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3401938 00:05:01.941 00:05:01.941 real 0m1.442s 00:05:01.941 user 0m1.494s 00:05:01.941 sys 0m0.426s 00:05:01.941 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.941 23:47:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.941 ************************************ 00:05:01.941 END TEST dpdk_mem_utility 00:05:01.941 ************************************ 00:05:01.941 23:47:02 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:01.941 23:47:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.941 23:47:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.941 23:47:02 -- common/autotest_common.sh@10 -- # set +x 00:05:01.941 ************************************ 00:05:01.941 START TEST event 00:05:01.941 ************************************ 00:05:01.941 23:47:02 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:02.200 * Looking for test storage... 
00:05:02.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:02.200 23:47:02 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:02.200 23:47:02 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:02.200 23:47:02 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.200 23:47:02 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:02.200 23:47:02 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.200 23:47:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.200 ************************************ 00:05:02.200 START TEST event_perf 00:05:02.200 ************************************ 00:05:02.200 23:47:02 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.200 Running I/O for 1 seconds...[2024-05-14 23:47:02.678455] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:02.200 [2024-05-14 23:47:02.678534] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402269 ] 00:05:02.200 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.200 [2024-05-14 23:47:02.749703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.459 [2024-05-14 23:47:02.823338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.459 [2024-05-14 23:47:02.823435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.459 [2024-05-14 23:47:02.823522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.459 [2024-05-14 23:47:02.823525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.396 Running I/O for 1 seconds... 00:05:03.396 lcore 0: 202577 00:05:03.396 lcore 1: 202576 00:05:03.396 lcore 2: 202577 00:05:03.396 lcore 3: 202576 00:05:03.396 done. 00:05:03.396 00:05:03.396 real 0m1.252s 00:05:03.396 user 0m4.154s 00:05:03.396 sys 0m0.090s 00:05:03.396 23:47:03 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.396 23:47:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.396 ************************************ 00:05:03.396 END TEST event_perf 00:05:03.396 ************************************ 00:05:03.396 23:47:03 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:03.396 23:47:03 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:03.396 23:47:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.396 23:47:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.655 ************************************ 00:05:03.655 START TEST event_reactor 00:05:03.655 ************************************ 00:05:03.655 23:47:03 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:03.655 [2024-05-14 23:47:04.019340] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:05:03.655 [2024-05-14 23:47:04.019417] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402554 ] 00:05:03.655 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.655 [2024-05-14 23:47:04.092562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.655 [2024-05-14 23:47:04.159364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.036 test_start 00:05:05.036 oneshot 00:05:05.036 tick 100 00:05:05.036 tick 100 00:05:05.036 tick 250 00:05:05.036 tick 100 00:05:05.036 tick 100 00:05:05.036 tick 100 00:05:05.036 tick 250 00:05:05.036 tick 500 00:05:05.036 tick 100 00:05:05.036 tick 100 00:05:05.036 tick 250 00:05:05.036 tick 100 00:05:05.036 tick 100 00:05:05.036 test_end 00:05:05.036 00:05:05.036 real 0m1.244s 00:05:05.036 user 0m1.147s 00:05:05.036 sys 0m0.093s 00:05:05.036 23:47:05 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.036 23:47:05 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:05.036 ************************************ 00:05:05.036 END TEST event_reactor 00:05:05.036 ************************************ 00:05:05.036 23:47:05 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.036 23:47:05 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:05.036 23:47:05 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.036 23:47:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.036 ************************************ 00:05:05.036 START TEST event_reactor_perf 00:05:05.036 ************************************ 00:05:05.036 23:47:05 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.036 [2024-05-14 23:47:05.341253] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:05:05.036 [2024-05-14 23:47:05.341333] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402834 ] 00:05:05.036 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.036 [2024-05-14 23:47:05.412433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.036 [2024-05-14 23:47:05.480084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.973 test_start 00:05:05.973 test_end 00:05:05.973 Performance: 524515 events per second 00:05:05.973 00:05:05.973 real 0m1.244s 00:05:05.973 user 0m1.161s 00:05:05.973 sys 0m0.080s 00:05:05.973 23:47:06 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.973 23:47:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.973 ************************************ 00:05:05.973 END TEST event_reactor_perf 00:05:05.973 ************************************ 00:05:06.234 23:47:06 event -- event/event.sh@49 -- # uname -s 00:05:06.234 23:47:06 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:06.234 23:47:06 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.234 23:47:06 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.234 23:47:06 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.234 23:47:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.234 ************************************ 00:05:06.234 START TEST event_scheduler 00:05:06.234 ************************************ 00:05:06.234 23:47:06 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.234 * Looking for test storage... 00:05:06.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:06.234 23:47:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:06.234 23:47:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3403092 00:05:06.234 23:47:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.234 23:47:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:06.234 23:47:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3403092 00:05:06.234 23:47:06 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3403092 ']' 00:05:06.234 23:47:06 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.234 23:47:06 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:06.234 23:47:06 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.234 23:47:06 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:06.234 23:47:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.234 [2024-05-14 23:47:06.799764] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:06.234 [2024-05-14 23:47:06.799812] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403092 ] 00:05:06.493 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.493 [2024-05-14 23:47:06.866422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.493 [2024-05-14 23:47:06.938944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.493 [2024-05-14 23:47:06.939027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.493 [2024-05-14 23:47:06.939114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.493 [2024-05-14 23:47:06.939116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.062 23:47:07 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:07.062 23:47:07 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:07.062 23:47:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:07.062 23:47:07 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.062 23:47:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.062 POWER: Env isn't set yet! 00:05:07.062 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:07.062 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.062 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.062 POWER: Attempting to initialise PSTAT power management... 00:05:07.062 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:07.062 POWER: Initialized successfully for lcore 0 power management 00:05:07.062 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:07.062 POWER: Initialized successfully for lcore 1 power management 00:05:07.323 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:07.323 POWER: Initialized successfully for lcore 2 power management 00:05:07.323 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:07.323 POWER: Initialized successfully for lcore 3 power management 00:05:07.323 23:47:07 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:07.323 23:47:07 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 [2024-05-14 23:47:07.735824] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
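The thread manipulation that follows is plain RPC traffic through the scheduler test plugin; roughly the pattern below, where rpc_cmd is assumed to resolve to scripts/rpc.py against /var/tmp/spdk.sock and scheduler_plugin is assumed importable the way the test harness arranges:

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # thread id 11 as returned by the create call
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # remove thread id 12 again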
00:05:07.323 23:47:07 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:07.323 23:47:07 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.323 23:47:07 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 ************************************ 00:05:07.323 START TEST scheduler_create_thread 00:05:07.323 ************************************ 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 2 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 3 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 4 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 5 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 6 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 7 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 8 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 9 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 10 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.323 23:47:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.261 23:47:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.261 23:47:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:08.261 23:47:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.261 23:47:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.679 23:47:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.679 23:47:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:09.679 23:47:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:09.679 23:47:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.679 23:47:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.617 23:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.617 00:05:10.617 real 0m3.382s 00:05:10.617 user 0m0.024s 00:05:10.617 sys 0m0.006s 00:05:10.617 23:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.617 23:47:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.617 ************************************ 00:05:10.617 END TEST scheduler_create_thread 00:05:10.617 ************************************ 00:05:10.617 23:47:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:10.617 23:47:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3403092 00:05:10.617 23:47:11 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3403092 ']' 00:05:10.617 23:47:11 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3403092 00:05:10.617 23:47:11 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:10.877 23:47:11 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:10.877 23:47:11 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3403092 00:05:10.877 23:47:11 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:10.877 23:47:11 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:10.877 23:47:11 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3403092' 00:05:10.877 killing process with pid 3403092 00:05:10.877 23:47:11 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3403092 00:05:10.877 23:47:11 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3403092 00:05:11.137 [2024-05-14 23:47:11.544109] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
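Each test in this log tears its target down through the same killprocess helper, whose xtrace shows up repeatedly above (kill -0, the ps comm check, kill, wait); in spirit it does roughly the following, a sketch rather than the exact autotest_common.sh implementation:

  pid=3403092                              # pid recorded when the app was launched
  if kill -0 "$pid" 2>/dev/null; then      # still alive?
      echo "killing process with pid $pid"
      kill "$pid"                          # request shutdown
      wait "$pid" 2>/dev/null || true      # reap it so the test leaves no stray children
  fi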
00:05:11.137 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:11.137 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:11.137 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:11.137 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:11.137 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:11.137 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:11.137 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:11.137 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:11.396 00:05:11.396 real 0m5.148s 00:05:11.396 user 0m10.580s 00:05:11.396 sys 0m0.418s 00:05:11.396 23:47:11 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.396 23:47:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.396 ************************************ 00:05:11.396 END TEST event_scheduler 00:05:11.396 ************************************ 00:05:11.396 23:47:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:11.396 23:47:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:11.396 23:47:11 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.396 23:47:11 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.396 23:47:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.396 ************************************ 00:05:11.396 START TEST app_repeat 00:05:11.396 ************************************ 00:05:11.396 23:47:11 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:11.396 23:47:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.396 23:47:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.396 23:47:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:11.396 23:47:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.397 23:47:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:11.397 23:47:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:11.397 23:47:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:11.397 23:47:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3404014 00:05:11.397 23:47:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.397 23:47:11 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:11.397 23:47:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3404014' 00:05:11.397 Process app_repeat pid: 3404014 00:05:11.397 23:47:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.397 23:47:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:11.397 spdk_app_start Round 0 00:05:11.397 23:47:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3404014 /var/tmp/spdk-nbd.sock 00:05:11.397 23:47:11 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3404014 ']' 00:05:11.397 23:47:11 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.397 23:47:11 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.397 23:47:11 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.397 23:47:11 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.397 23:47:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 [2024-05-14 23:47:11.925456] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:11.397 [2024-05-14 23:47:11.925520] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404014 ] 00:05:11.397 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.656 [2024-05-14 23:47:11.994578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.656 [2024-05-14 23:47:12.071762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.656 [2024-05-14 23:47:12.071765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.225 23:47:12 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.225 23:47:12 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:12.225 23:47:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.484 Malloc0 00:05:12.484 23:47:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.743 Malloc1 00:05:12.743 23:47:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.743 /dev/nbd0 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.743 1+0 records in 00:05:12.743 1+0 records out 00:05:12.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021554 s, 19.0 MB/s 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:12.743 23:47:13 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:12.743 23:47:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.744 23:47:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.744 23:47:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.002 /dev/nbd1 00:05:13.002 23:47:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.002 23:47:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.002 1+0 records in 00:05:13.002 1+0 records out 00:05:13.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000223137 s, 18.4 MB/s 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:13.002 23:47:13 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:13.002 23:47:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.002 23:47:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.002 23:47:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.002 23:47:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.002 23:47:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.262 { 00:05:13.262 "nbd_device": "/dev/nbd0", 00:05:13.262 "bdev_name": "Malloc0" 00:05:13.262 }, 00:05:13.262 { 00:05:13.262 "nbd_device": "/dev/nbd1", 00:05:13.262 "bdev_name": "Malloc1" 00:05:13.262 } 00:05:13.262 ]' 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.262 { 00:05:13.262 "nbd_device": "/dev/nbd0", 00:05:13.262 "bdev_name": "Malloc0" 00:05:13.262 }, 00:05:13.262 { 00:05:13.262 "nbd_device": "/dev/nbd1", 00:05:13.262 "bdev_name": "Malloc1" 00:05:13.262 } 00:05:13.262 ]' 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.262 /dev/nbd1' 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.262 /dev/nbd1' 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.262 256+0 records in 00:05:13.262 256+0 records out 00:05:13.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114925 s, 91.2 MB/s 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.262 256+0 records in 00:05:13.262 256+0 records out 00:05:13.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166594 s, 62.9 MB/s 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.262 256+0 records in 00:05:13.262 256+0 records out 00:05:13.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213516 s, 49.1 MB/s 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.262 23:47:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.522 23:47:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.522 23:47:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.522 23:47:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.522 23:47:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.522 23:47:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.522 23:47:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.522 23:47:14 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:05:13.522 23:47:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.522 23:47:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.522 23:47:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.781 23:47:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.040 23:47:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.040 23:47:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.300 23:47:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.300 [2024-05-14 23:47:14.850158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.559 [2024-05-14 23:47:14.914610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.559 [2024-05-14 23:47:14.914612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.559 [2024-05-14 23:47:14.956293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.559 [2024-05-14 23:47:14.956337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
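The waitfornbd checks interleaved above poll /proc/partitions for the new nbd node and then read a single 4 KiB block back from it. Assuming that structure (the retry delay and the scratch-file path are guesses; the grep, dd and stat calls are the ones visible in the trace), a stand-alone version of the helper could look like:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break           # node visible in /proc/partitions yet?
            sleep 0.1                                                  # retry delay (assumed)
        done
        for ((i = 1; i <= 20; i++)); do
            # read one block with O_DIRECT; a non-empty copy means the device is answering I/O
            if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }
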
00:05:17.096 23:47:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.096 23:47:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.096 spdk_app_start Round 1 00:05:17.096 23:47:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3404014 /var/tmp/spdk-nbd.sock 00:05:17.096 23:47:17 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3404014 ']' 00:05:17.096 23:47:17 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.096 23:47:17 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.097 23:47:17 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.097 23:47:17 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.097 23:47:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.356 23:47:17 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.356 23:47:17 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:17.356 23:47:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.615 Malloc0 00:05:17.615 23:47:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.615 Malloc1 00:05:17.615 23:47:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.615 23:47:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.875 /dev/nbd0 00:05:17.875 23:47:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.875 23:47:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.875 1+0 records in 00:05:17.875 1+0 records out 00:05:17.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227848 s, 18.0 MB/s 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:17.875 23:47:18 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:17.875 23:47:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.875 23:47:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.875 23:47:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.134 /dev/nbd1 00:05:18.134 23:47:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.134 23:47:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.134 23:47:18 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.135 1+0 records in 00:05:18.135 1+0 records out 00:05:18.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247295 s, 16.6 MB/s 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:18.135 23:47:18 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:18.135 23:47:18 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:18.135 23:47:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.135 23:47:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.135 23:47:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.135 23:47:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.135 23:47:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.394 { 00:05:18.394 "nbd_device": "/dev/nbd0", 00:05:18.394 "bdev_name": "Malloc0" 00:05:18.394 }, 00:05:18.394 { 00:05:18.394 "nbd_device": "/dev/nbd1", 00:05:18.394 "bdev_name": "Malloc1" 00:05:18.394 } 00:05:18.394 ]' 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.394 { 00:05:18.394 "nbd_device": "/dev/nbd0", 00:05:18.394 "bdev_name": "Malloc0" 00:05:18.394 }, 00:05:18.394 { 00:05:18.394 "nbd_device": "/dev/nbd1", 00:05:18.394 "bdev_name": "Malloc1" 00:05:18.394 } 00:05:18.394 ]' 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.394 /dev/nbd1' 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.394 /dev/nbd1' 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.394 256+0 records in 00:05:18.394 256+0 records out 00:05:18.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114092 s, 91.9 MB/s 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.394 256+0 records in 00:05:18.394 256+0 records out 00:05:18.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0202151 s, 51.9 MB/s 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.394 256+0 records in 00:05:18.394 256+0 records out 00:05:18.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189857 s, 55.2 MB/s 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.394 23:47:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.395 23:47:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.395 23:47:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.654 23:47:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.654 23:47:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.654 23:47:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.654 23:47:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.654 23:47:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.654 23:47:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.654 23:47:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.654 23:47:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.654 23:47:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.654 23:47:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.914 23:47:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.173 23:47:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.173 23:47:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.173 23:47:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.174 23:47:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.174 23:47:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.174 23:47:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.174 23:47:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.174 23:47:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.174 23:47:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.174 23:47:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.174 23:47:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.433 [2024-05-14 23:47:19.911497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.433 [2024-05-14 23:47:19.974271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.433 [2024-05-14 23:47:19.974273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.433 [2024-05-14 23:47:20.017155] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.433 [2024-05-14 23:47:20.017212] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
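Each app_repeat round above repeats the same write/verify cycle: export two 64 MiB malloc bdevs as nbd devices, write 1 MiB of random data through them, compare it back, then tear everything down with a SIGTERM to the app. A compressed sketch of one such round, using the socket, block sizes and commands shown in the log (the scratch-file location is the only substitution):

    sock=/var/tmp/spdk-nbd.sock
    rpc="./scripts/rpc.py -s $sock"

    $rpc bdev_malloc_create 64 4096                   # -> Malloc0 (64 MiB, 4 KiB blocks)
    $rpc bdev_malloc_create 64 4096                   # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256          # 1 MiB of reference data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct # write it out
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest $nbd                            # read back and compare
    done
    rm /tmp/nbdrandtest

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM                   # end of the round
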
00:05:22.725 23:47:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.725 23:47:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.725 spdk_app_start Round 2 00:05:22.725 23:47:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3404014 /var/tmp/spdk-nbd.sock 00:05:22.725 23:47:22 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3404014 ']' 00:05:22.725 23:47:22 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.725 23:47:22 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:22.725 23:47:22 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.725 23:47:22 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:22.725 23:47:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.725 23:47:22 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.725 23:47:22 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:22.725 23:47:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.725 Malloc0 00:05:22.725 23:47:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.725 Malloc1 00:05:22.725 23:47:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.725 23:47:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.994 /dev/nbd0 00:05:22.994 23:47:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.994 23:47:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.994 1+0 records in 00:05:22.994 1+0 records out 00:05:22.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235184 s, 17.4 MB/s 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:22.994 23:47:23 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:22.994 23:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.994 23:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.994 23:47:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.994 /dev/nbd1 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.256 1+0 records in 00:05:23.256 1+0 records out 00:05:23.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263364 s, 15.6 MB/s 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:23.256 23:47:23 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:23.256 23:47:23 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.256 { 00:05:23.256 "nbd_device": "/dev/nbd0", 00:05:23.256 "bdev_name": "Malloc0" 00:05:23.256 }, 00:05:23.256 { 00:05:23.256 "nbd_device": "/dev/nbd1", 00:05:23.256 "bdev_name": "Malloc1" 00:05:23.256 } 00:05:23.256 ]' 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.256 { 00:05:23.256 "nbd_device": "/dev/nbd0", 00:05:23.256 "bdev_name": "Malloc0" 00:05:23.256 }, 00:05:23.256 { 00:05:23.256 "nbd_device": "/dev/nbd1", 00:05:23.256 "bdev_name": "Malloc1" 00:05:23.256 } 00:05:23.256 ]' 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.256 /dev/nbd1' 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.256 /dev/nbd1' 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.256 23:47:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.515 256+0 records in 00:05:23.515 256+0 records out 00:05:23.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114134 s, 91.9 MB/s 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.515 256+0 records in 00:05:23.515 256+0 records out 00:05:23.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0184725 s, 56.8 MB/s 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.515 256+0 records in 00:05:23.515 256+0 records out 00:05:23.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021055 s, 49.8 MB/s 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.515 23:47:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.776 23:47:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.067 23:47:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.067 23:47:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.339 23:47:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.599 [2024-05-14 23:47:24.937562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.599 [2024-05-14 23:47:25.001513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.599 [2024-05-14 23:47:25.001516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.599 [2024-05-14 23:47:25.043241] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.599 [2024-05-14 23:47:25.043286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
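The Round 0/1/2 banners and the repeated SIGTERM-then-sleep pattern above come from a small driver loop in event.sh. A rough reconstruction under the parameters shown in the log; backgrounding the app with & and the relative binary path are assumptions, and the per-round nbd work is abbreviated to a comment:

    rpc_server=/var/tmp/spdk-nbd.sock
    ./test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    echo "Process app_repeat pid: $repeat_pid"

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"        # block until the RPC socket is up
        # ... malloc/nbd setup and the dd/cmp verification for this round ...
        ./scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3                                          # give the app time to start its next round
    done

    waitforlisten "$repeat_pid" "$rpc_server"            # Round 3 comes up, then the test tears down
    killprocess "$repeat_pid"
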
00:05:27.893 23:47:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3404014 /var/tmp/spdk-nbd.sock 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3404014 ']' 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:27.893 23:47:27 event.app_repeat -- event/event.sh@39 -- # killprocess 3404014 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3404014 ']' 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3404014 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3404014 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3404014' 00:05:27.893 killing process with pid 3404014 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3404014 00:05:27.893 23:47:27 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3404014 00:05:27.893 spdk_app_start is called in Round 0. 00:05:27.893 Shutdown signal received, stop current app iteration 00:05:27.893 Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 reinitialization... 00:05:27.893 spdk_app_start is called in Round 1. 00:05:27.893 Shutdown signal received, stop current app iteration 00:05:27.893 Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 reinitialization... 00:05:27.893 spdk_app_start is called in Round 2. 00:05:27.893 Shutdown signal received, stop current app iteration 00:05:27.893 Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 reinitialization... 00:05:27.893 spdk_app_start is called in Round 3. 
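killprocess, used both for the scheduler app and for app_repeat above, follows the check-then-kill shape visible in the trace: verify the pid is alive, look up its command name, announce the kill, then reap it. A trimmed sketch under those assumptions; the sudo branch, which this run never takes, is left out:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1                  # still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        fi
        if [ "$process_name" != sudo ]; then                    # the sudo wrapper case needs extra handling, omitted here
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                             # reap the child and propagate its exit status
    }
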
00:05:27.893 Shutdown signal received, stop current app iteration 00:05:27.893 23:47:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.893 23:47:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:27.893 00:05:27.893 real 0m16.250s 00:05:27.893 user 0m34.479s 00:05:27.893 sys 0m2.966s 00:05:27.893 23:47:28 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.893 23:47:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.893 ************************************ 00:05:27.893 END TEST app_repeat 00:05:27.893 ************************************ 00:05:27.893 23:47:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.893 23:47:28 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.893 23:47:28 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.893 23:47:28 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.893 23:47:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.893 ************************************ 00:05:27.893 START TEST cpu_locks 00:05:27.893 ************************************ 00:05:27.893 23:47:28 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.893 * Looking for test storage... 00:05:27.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:27.893 23:47:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.893 23:47:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.893 23:47:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.893 23:47:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.893 23:47:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.893 23:47:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.893 23:47:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.893 ************************************ 00:05:27.893 START TEST default_locks 00:05:27.893 ************************************ 00:05:27.893 23:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:27.893 23:47:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3407104 00:05:27.893 23:47:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3407104 00:05:27.893 23:47:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.894 23:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3407104 ']' 00:05:27.894 23:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.894 23:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.894 23:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
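The locks_exist checks that run throughout the cpu_locks tests below reduce to a single lslocks probe per target pid. A minimal sketch using the pid and lock-file prefix from this run (the /var/tmp/spdk_cpu_lock_* file names themselves appear in check_remaining_locks further down):

pid=3407104        # spdk_tgt started above with -m 0x1
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"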
00:05:27.894 23:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.894 23:47:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.894 [2024-05-14 23:47:28.435948] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:27.894 [2024-05-14 23:47:28.435997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407104 ] 00:05:27.894 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.153 [2024-05-14 23:47:28.505075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.153 [2024-05-14 23:47:28.578842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.722 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:28.722 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:28.722 23:47:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3407104 00:05:28.722 23:47:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3407104 00:05:28.722 23:47:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.660 lslocks: write error 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3407104 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3407104 ']' 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3407104 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3407104 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3407104' 00:05:29.660 killing process with pid 3407104 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3407104 00:05:29.660 23:47:29 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3407104 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3407104 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3407104 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 3407104 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3407104 ']' 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.919 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3407104) - No such process 00:05:29.919 ERROR: process (pid: 3407104) is no longer running 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.920 00:05:29.920 real 0m1.913s 00:05:29.920 user 0m1.993s 00:05:29.920 sys 0m0.717s 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.920 23:47:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.920 ************************************ 00:05:29.920 END TEST default_locks 00:05:29.920 ************************************ 00:05:29.920 23:47:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:29.920 23:47:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.920 23:47:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.920 23:47:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.920 ************************************ 00:05:29.920 START TEST default_locks_via_rpc 00:05:29.920 ************************************ 00:05:29.920 23:47:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:29.920 23:47:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3407473 00:05:29.920 23:47:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3407473 00:05:29.920 23:47:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3407473 ']' 00:05:29.920 23:47:30 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.920 23:47:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.920 23:47:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.920 23:47:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.920 23:47:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.920 23:47:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.920 [2024-05-14 23:47:30.427485] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:29.920 [2024-05-14 23:47:30.427530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407473 ] 00:05:29.920 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.920 [2024-05-14 23:47:30.496710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.179 [2024-05-14 23:47:30.571546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.746 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3407473 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3407473 00:05:30.747 23:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.315 23:47:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3407473 00:05:31.315 23:47:31 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3407473 ']' 00:05:31.315 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3407473 00:05:31.315 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:31.315 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:31.315 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3407473 00:05:31.315 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:31.315 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:31.315 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3407473' 00:05:31.315 killing process with pid 3407473 00:05:31.315 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3407473 00:05:31.315 23:47:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3407473 00:05:31.574 00:05:31.574 real 0m1.650s 00:05:31.574 user 0m1.705s 00:05:31.574 sys 0m0.578s 00:05:31.574 23:47:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.574 23:47:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.574 ************************************ 00:05:31.574 END TEST default_locks_via_rpc 00:05:31.574 ************************************ 00:05:31.574 23:47:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:31.574 23:47:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.574 23:47:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.574 23:47:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.574 ************************************ 00:05:31.574 START TEST non_locking_app_on_locked_coremask 00:05:31.574 ************************************ 00:05:31.574 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:31.574 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3407767 00:05:31.574 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3407767 /var/tmp/spdk.sock 00:05:31.574 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3407767 ']' 00:05:31.574 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.574 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.574 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:31.574 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.574 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.575 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.575 [2024-05-14 23:47:32.160617] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:31.575 [2024-05-14 23:47:32.160661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407767 ] 00:05:31.834 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.834 [2024-05-14 23:47:32.228686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.834 [2024-05-14 23:47:32.301592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3407923 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3407923 /var/tmp/spdk2.sock 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3407923 ']' 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.402 23:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:32.402 [2024-05-14 23:47:32.987372] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:32.402 [2024-05-14 23:47:32.987424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407923 ] 00:05:32.661 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.661 [2024-05-14 23:47:33.081524] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
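What non_locking_app_on_locked_coremask is exercising here, reduced to its two launches (binary path shortened; backgrounding and the waitforlisten handshakes are omitted): the first target claims the core 0 lock file, and the second may share core 0 only because it opts out of lock claiming, which is the "CPU core locks deactivated" notice above.

build/bin/spdk_tgt -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # starts with core locks deactivated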
00:05:32.661 [2024-05-14 23:47:33.081550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.661 [2024-05-14 23:47:33.226030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.229 23:47:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:33.229 23:47:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:33.229 23:47:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3407767 00:05:33.229 23:47:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3407767 00:05:33.229 23:47:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.167 lslocks: write error 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3407767 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3407767 ']' 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3407767 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3407767 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3407767' 00:05:34.167 killing process with pid 3407767 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3407767 00:05:34.167 23:47:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3407767 00:05:34.735 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3407923 00:05:34.735 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3407923 ']' 00:05:34.735 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3407923 00:05:34.735 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:34.735 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:34.735 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3407923 00:05:34.994 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:34.994 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:34.994 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3407923' 00:05:34.994 
killing process with pid 3407923 00:05:34.994 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3407923 00:05:34.994 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3407923 00:05:35.255 00:05:35.255 real 0m3.575s 00:05:35.255 user 0m3.818s 00:05:35.255 sys 0m1.142s 00:05:35.255 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.255 23:47:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.255 ************************************ 00:05:35.255 END TEST non_locking_app_on_locked_coremask 00:05:35.255 ************************************ 00:05:35.255 23:47:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:35.255 23:47:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.255 23:47:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.255 23:47:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.255 ************************************ 00:05:35.255 START TEST locking_app_on_unlocked_coremask 00:05:35.255 ************************************ 00:05:35.255 23:47:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:05:35.255 23:47:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3408372 00:05:35.255 23:47:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3408372 /var/tmp/spdk.sock 00:05:35.255 23:47:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:35.255 23:47:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3408372 ']' 00:05:35.255 23:47:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.255 23:47:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:35.255 23:47:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.255 23:47:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:35.255 23:47:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.255 [2024-05-14 23:47:35.822788] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:35.255 [2024-05-14 23:47:35.822838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408372 ] 00:05:35.515 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.515 [2024-05-14 23:47:35.891052] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:35.515 [2024-05-14 23:47:35.891079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.515 [2024-05-14 23:47:35.964568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.083 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.083 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:36.084 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:36.084 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3408614 00:05:36.084 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3408614 /var/tmp/spdk2.sock 00:05:36.084 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3408614 ']' 00:05:36.084 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.084 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.084 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.084 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.084 23:47:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.084 [2024-05-14 23:47:36.655430] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:05:36.084 [2024-05-14 23:47:36.655486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408614 ] 00:05:36.343 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.343 [2024-05-14 23:47:36.755120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.343 [2024-05-14 23:47:36.892791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.911 23:47:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.911 23:47:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:36.911 23:47:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3408614 00:05:36.911 23:47:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3408614 00:05:36.911 23:47:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.288 lslocks: write error 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3408372 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3408372 ']' 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3408372 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3408372 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3408372' 00:05:38.288 killing process with pid 3408372 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3408372 00:05:38.288 23:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3408372 00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3408614 00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3408614 ']' 00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3408614 00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3408614 00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3408614' 00:05:38.856 killing process with pid 3408614 00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3408614 00:05:38.856 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3408614 00:05:39.471 00:05:39.471 real 0m4.000s 00:05:39.471 user 0m4.267s 00:05:39.471 sys 0m1.342s 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.471 ************************************ 00:05:39.471 END TEST locking_app_on_unlocked_coremask 00:05:39.471 ************************************ 00:05:39.471 23:47:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:39.471 23:47:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.471 23:47:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.471 23:47:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.471 ************************************ 00:05:39.471 START TEST locking_app_on_locked_coremask 00:05:39.471 ************************************ 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3409183 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3409183 /var/tmp/spdk.sock 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3409183 ']' 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:39.471 23:47:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.471 [2024-05-14 23:47:39.885515] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:05:39.471 [2024-05-14 23:47:39.885556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409183 ] 00:05:39.471 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.471 [2024-05-14 23:47:39.953421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.471 [2024-05-14 23:47:40.031280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3409380 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3409380 /var/tmp/spdk2.sock 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3409380 /var/tmp/spdk2.sock 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3409380 /var/tmp/spdk2.sock 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3409380 ']' 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.410 23:47:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.410 [2024-05-14 23:47:40.728229] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:05:40.410 [2024-05-14 23:47:40.728280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409380 ] 00:05:40.410 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.410 [2024-05-14 23:47:40.824536] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3409183 has claimed it. 00:05:40.410 [2024-05-14 23:47:40.824573] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3409380) - No such process 00:05:40.979 ERROR: process (pid: 3409380) is no longer running 00:05:40.979 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:40.979 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:40.979 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:40.979 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.979 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.979 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.979 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3409183 00:05:40.979 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3409183 00:05:40.979 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.239 lslocks: write error 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3409183 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3409183 ']' 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3409183 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3409183 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3409183' 00:05:41.239 killing process with pid 3409183 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3409183 00:05:41.239 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3409183 00:05:41.498 00:05:41.498 real 0m2.116s 00:05:41.498 user 0m2.319s 00:05:41.498 sys 0m0.592s 00:05:41.498 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.498 23:47:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.498 ************************************ 00:05:41.498 END TEST locking_app_on_locked_coremask 00:05:41.498 ************************************ 00:05:41.498 23:47:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:41.498 23:47:42 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.498 23:47:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.498 23:47:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.498 ************************************ 00:05:41.498 START TEST locking_overlapped_coremask 00:05:41.498 ************************************ 00:05:41.498 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:05:41.498 23:47:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3409587 00:05:41.498 23:47:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3409587 /var/tmp/spdk.sock 00:05:41.498 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3409587 ']' 00:05:41.498 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.498 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.498 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.498 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.498 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.498 23:47:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:41.758 [2024-05-14 23:47:42.102406] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:05:41.758 [2024-05-14 23:47:42.102448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409587 ] 00:05:41.758 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.758 [2024-05-14 23:47:42.170373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.758 [2024-05-14 23:47:42.241873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.758 [2024-05-14 23:47:42.241891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.758 [2024-05-14 23:47:42.241893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.326 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.326 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:42.326 23:47:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3409760 00:05:42.326 23:47:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3409760 /var/tmp/spdk2.sock 00:05:42.326 23:47:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:42.326 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:42.326 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3409760 /var/tmp/spdk2.sock 00:05:42.326 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:42.326 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.326 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:42.327 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.327 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3409760 /var/tmp/spdk2.sock 00:05:42.327 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3409760 ']' 00:05:42.327 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.327 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.327 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.327 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.327 23:47:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.586 [2024-05-14 23:47:42.942814] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:05:42.586 [2024-05-14 23:47:42.942865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409760 ] 00:05:42.586 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.586 [2024-05-14 23:47:43.040835] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3409587 has claimed it. 00:05:42.586 [2024-05-14 23:47:43.040872] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3409760) - No such process 00:05:43.155 ERROR: process (pid: 3409760) is no longer running 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3409587 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3409587 ']' 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3409587 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3409587 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3409587' 00:05:43.155 killing process with pid 3409587 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3409587 00:05:43.155 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3409587 00:05:43.415 00:05:43.415 real 0m1.901s 00:05:43.415 user 0m5.255s 00:05:43.415 sys 0m0.476s 00:05:43.415 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.415 23:47:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.415 ************************************ 00:05:43.415 END TEST locking_overlapped_coremask 00:05:43.415 ************************************ 00:05:43.415 23:47:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:43.415 23:47:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.415 23:47:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.415 23:47:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.675 ************************************ 00:05:43.675 START TEST locking_overlapped_coremask_via_rpc 00:05:43.675 ************************************ 00:05:43.675 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:05:43.675 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3410050 00:05:43.675 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3410050 /var/tmp/spdk.sock 00:05:43.675 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:43.675 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3410050 ']' 00:05:43.675 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.675 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:43.675 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.675 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:43.675 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.675 [2024-05-14 23:47:44.090361] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:43.675 [2024-05-14 23:47:44.090410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410050 ] 00:05:43.675 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.675 [2024-05-14 23:47:44.159794] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
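The locking_overlapped_coremask failure just shown comes down to overlapping masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two instances collide on core 2. A minimal sketch (binary path shortened; the second command aborts exactly as logged above):

build/bin/spdk_tgt -m 0x7 &                            # claims cores 0, 1 and 2
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock      # claim_cpu_cores: Cannot create lock on core 2 -> exits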
00:05:43.675 [2024-05-14 23:47:44.159821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.675 [2024-05-14 23:47:44.225880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.675 [2024-05-14 23:47:44.225977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.675 [2024-05-14 23:47:44.225980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.613 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.613 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:44.613 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3410068 00:05:44.613 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3410068 /var/tmp/spdk2.sock 00:05:44.613 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:44.613 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3410068 ']' 00:05:44.613 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.613 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.613 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.614 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.614 23:47:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.614 [2024-05-14 23:47:44.927227] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:44.614 [2024-05-14 23:47:44.927280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410068 ] 00:05:44.614 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.614 [2024-05-14 23:47:45.028798] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.614 [2024-05-14 23:47:45.028826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.614 [2024-05-14 23:47:45.183177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.614 [2024-05-14 23:47:45.186239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.614 [2024-05-14 23:47:45.186240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:45.183 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.183 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:45.183 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.183 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.183 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.184 [2024-05-14 23:47:45.752257] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3410050 has claimed it. 
00:05:45.184 request: 00:05:45.184 { 00:05:45.184 "method": "framework_enable_cpumask_locks", 00:05:45.184 "req_id": 1 00:05:45.184 } 00:05:45.184 Got JSON-RPC error response 00:05:45.184 response: 00:05:45.184 { 00:05:45.184 "code": -32603, 00:05:45.184 "message": "Failed to claim CPU core: 2" 00:05:45.184 } 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3410050 /var/tmp/spdk.sock 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3410050 ']' 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.184 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.443 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.443 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:45.443 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3410068 /var/tmp/spdk2.sock 00:05:45.443 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3410068 ']' 00:05:45.443 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.443 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.443 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
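That -32603 error is the expected result: the first target (pid 3410050, mask 0x7) enabled its locks first and claimed cores 0-2, so the second target (mask 0x1c) cannot lock the shared core 2. Stripped of the harness wrappers, the sequence traced above amounts to roughly the following; rpc.py is assumed to be SPDK's scripts/rpc.py, and the binary and socket paths are the ones used in this run.

spdk_tgt -m 0x7  --disable-cpumask-locks &                          # first target, RPC on /var/tmp/spdk.sock
spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # second target, overlaps on core 2
rpc.py framework_enable_cpumask_locks                               # first target claims cores 0, 1, 2
rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks        # fails with "Failed to claim CPU core: 2"

The lock files themselves (/var/tmp/spdk_cpu_lock_000 through _002) are what check_remaining_locks verifies a little further down.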
00:05:45.443 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.443 23:47:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.704 23:47:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.704 23:47:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:45.704 23:47:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:45.704 23:47:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:45.704 23:47:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:45.704 23:47:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:45.704 00:05:45.704 real 0m2.089s 00:05:45.704 user 0m0.808s 00:05:45.704 sys 0m0.209s 00:05:45.704 23:47:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.704 23:47:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.704 ************************************ 00:05:45.704 END TEST locking_overlapped_coremask_via_rpc 00:05:45.704 ************************************ 00:05:45.704 23:47:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:45.704 23:47:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3410050 ]] 00:05:45.704 23:47:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3410050 00:05:45.704 23:47:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3410050 ']' 00:05:45.704 23:47:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3410050 00:05:45.704 23:47:46 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:45.704 23:47:46 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:45.704 23:47:46 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410050 00:05:45.704 23:47:46 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:45.704 23:47:46 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:45.704 23:47:46 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410050' 00:05:45.704 killing process with pid 3410050 00:05:45.704 23:47:46 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3410050 00:05:45.704 23:47:46 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3410050 00:05:46.273 23:47:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3410068 ]] 00:05:46.273 23:47:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3410068 00:05:46.273 23:47:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3410068 ']' 00:05:46.273 23:47:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3410068 00:05:46.273 23:47:46 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:46.273 23:47:46 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:05:46.273 23:47:46 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410068 00:05:46.273 23:47:46 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:46.273 23:47:46 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:46.273 23:47:46 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410068' 00:05:46.273 killing process with pid 3410068 00:05:46.273 23:47:46 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3410068 00:05:46.273 23:47:46 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3410068 00:05:46.533 23:47:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:46.533 23:47:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:46.533 23:47:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3410050 ]] 00:05:46.533 23:47:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3410050 00:05:46.533 23:47:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3410050 ']' 00:05:46.533 23:47:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3410050 00:05:46.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3410050) - No such process 00:05:46.533 23:47:46 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3410050 is not found' 00:05:46.533 Process with pid 3410050 is not found 00:05:46.533 23:47:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3410068 ]] 00:05:46.533 23:47:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3410068 00:05:46.533 23:47:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3410068 ']' 00:05:46.533 23:47:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3410068 00:05:46.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3410068) - No such process 00:05:46.533 23:47:46 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3410068 is not found' 00:05:46.533 Process with pid 3410068 is not found 00:05:46.533 23:47:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:46.533 00:05:46.533 real 0m18.747s 00:05:46.533 user 0m30.861s 00:05:46.533 sys 0m6.107s 00:05:46.533 23:47:46 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.533 23:47:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.533 ************************************ 00:05:46.533 END TEST cpu_locks 00:05:46.533 ************************************ 00:05:46.533 00:05:46.533 real 0m44.513s 00:05:46.533 user 1m22.598s 00:05:46.533 sys 0m10.183s 00:05:46.533 23:47:47 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.533 23:47:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.533 ************************************ 00:05:46.533 END TEST event 00:05:46.533 ************************************ 00:05:46.533 23:47:47 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:46.533 23:47:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.533 23:47:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.533 23:47:47 -- common/autotest_common.sh@10 -- # set +x 00:05:46.533 ************************************ 00:05:46.533 START TEST thread 00:05:46.533 ************************************ 00:05:46.533 23:47:47 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:46.793 * Looking for test storage... 00:05:46.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:46.793 23:47:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:46.793 23:47:47 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:46.793 23:47:47 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.793 23:47:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.793 ************************************ 00:05:46.793 START TEST thread_poller_perf 00:05:46.793 ************************************ 00:05:46.793 23:47:47 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:46.793 [2024-05-14 23:47:47.279063] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:46.793 [2024-05-14 23:47:47.279144] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410688 ] 00:05:46.793 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.793 [2024-05-14 23:47:47.350975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.051 [2024-05-14 23:47:47.421017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.051 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:47.988 ====================================== 00:05:47.988 busy:2505579686 (cyc) 00:05:47.988 total_run_count: 427000 00:05:47.988 tsc_hz: 2500000000 (cyc) 00:05:47.988 ====================================== 00:05:47.988 poller_cost: 5867 (cyc), 2346 (nsec) 00:05:47.988 00:05:47.988 real 0m1.256s 00:05:47.988 user 0m1.158s 00:05:47.988 sys 0m0.093s 00:05:47.988 23:47:48 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.988 23:47:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.988 ************************************ 00:05:47.988 END TEST thread_poller_perf 00:05:47.988 ************************************ 00:05:47.988 23:47:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:47.988 23:47:48 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:47.988 23:47:48 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.988 23:47:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.248 ************************************ 00:05:48.248 START TEST thread_poller_perf 00:05:48.248 ************************************ 00:05:48.248 23:47:48 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.248 [2024-05-14 23:47:48.629933] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
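For reference, the poller_cost figures above follow directly from the other counters: busy cycles divided by total_run_count, converted to nanoseconds with the reported TSC rate. A quick sanity check of the 1-microsecond-period run, using shell integer arithmetic and the values copied from the log:

busy=2505579686 runs=427000 tsc_hz=2500000000
echo "cycles per poller run: $(( busy / runs ))"                        # 5867, as reported
echo "ns per poller run:     $(( busy / runs * 1000000000 / tsc_hz ))"  # 2346 ns at 2.5 GHz

The 0-microsecond-period run that follows works out the same way (2501619004 / 5665000 gives the reported 441 cycles, about 176 ns); the much lower per-run cost presumably reflects running the pollers busy rather than on a 1 µs timer.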
00:05:48.248 [2024-05-14 23:47:48.630019] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410976 ] 00:05:48.248 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.248 [2024-05-14 23:47:48.702780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.248 [2024-05-14 23:47:48.771452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.248 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:49.627 ====================================== 00:05:49.627 busy:2501619004 (cyc) 00:05:49.627 total_run_count: 5665000 00:05:49.627 tsc_hz: 2500000000 (cyc) 00:05:49.627 ====================================== 00:05:49.627 poller_cost: 441 (cyc), 176 (nsec) 00:05:49.627 00:05:49.627 real 0m1.252s 00:05:49.627 user 0m1.161s 00:05:49.627 sys 0m0.086s 00:05:49.627 23:47:49 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.627 23:47:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.627 ************************************ 00:05:49.627 END TEST thread_poller_perf 00:05:49.627 ************************************ 00:05:49.627 23:47:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:49.627 00:05:49.627 real 0m2.797s 00:05:49.627 user 0m2.425s 00:05:49.627 sys 0m0.376s 00:05:49.627 23:47:49 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.627 23:47:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.627 ************************************ 00:05:49.627 END TEST thread 00:05:49.627 ************************************ 00:05:49.627 23:47:49 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:49.627 23:47:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.627 23:47:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.627 23:47:49 -- common/autotest_common.sh@10 -- # set +x 00:05:49.627 ************************************ 00:05:49.627 START TEST accel 00:05:49.627 ************************************ 00:05:49.627 23:47:49 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:49.627 * Looking for test storage... 00:05:49.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:49.627 23:47:50 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:49.627 23:47:50 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:49.628 23:47:50 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.628 23:47:50 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3411295 00:05:49.628 23:47:50 accel -- accel/accel.sh@63 -- # waitforlisten 3411295 00:05:49.628 23:47:50 accel -- common/autotest_common.sh@827 -- # '[' -z 3411295 ']' 00:05:49.628 23:47:50 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.628 23:47:50 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.628 23:47:50 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:49.628 23:47:50 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:49.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.628 23:47:50 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:49.628 23:47:50 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.628 23:47:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.628 23:47:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.628 23:47:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.628 23:47:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.628 23:47:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.628 23:47:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.628 23:47:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:49.628 23:47:50 accel -- accel/accel.sh@41 -- # jq -r . 00:05:49.628 [2024-05-14 23:47:50.152229] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:49.628 [2024-05-14 23:47:50.152279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411295 ] 00:05:49.628 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.887 [2024-05-14 23:47:50.221826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.887 [2024-05-14 23:47:50.295836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.458 23:47:50 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.458 23:47:50 accel -- common/autotest_common.sh@860 -- # return 0 00:05:50.458 23:47:50 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:50.458 23:47:50 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:50.458 23:47:50 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:50.458 23:47:50 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:50.458 23:47:50 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:50.458 23:47:50 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:50.458 23:47:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:50.458 23:47:50 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.458 23:47:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.458 23:47:50 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 
00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.458 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.458 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.458 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.459 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.459 23:47:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.459 23:47:50 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.459 23:47:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.459 23:47:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.459 23:47:50 accel -- accel/accel.sh@75 -- # killprocess 3411295 00:05:50.459 23:47:51 accel -- common/autotest_common.sh@946 -- # '[' -z 3411295 ']' 00:05:50.459 23:47:51 accel -- common/autotest_common.sh@950 -- # kill -0 3411295 00:05:50.459 23:47:51 accel -- common/autotest_common.sh@951 -- # uname 00:05:50.459 23:47:51 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:50.459 23:47:51 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3411295 00:05:50.718 23:47:51 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:50.718 23:47:51 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:50.718 23:47:51 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3411295' 00:05:50.718 killing process with pid 3411295 00:05:50.718 23:47:51 accel -- common/autotest_common.sh@965 -- # kill 3411295 00:05:50.718 23:47:51 accel -- common/autotest_common.sh@970 -- # wait 3411295 00:05:50.978 23:47:51 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:50.978 23:47:51 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:50.978 23:47:51 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:50.978 23:47:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.978 23:47:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.978 23:47:51 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:50.978 23:47:51 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:50.978 23:47:51 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:50.978 23:47:51 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.978 23:47:51 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.978 23:47:51 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.978 23:47:51 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.978 23:47:51 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.978 23:47:51 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:50.978 23:47:51 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
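The long run of IFS== / read -r opc module lines above is the harness filling its expected_opcs table: it asks the target which module currently handles each accel opcode and records that everything is assigned to software. Outside the harness the same query looks roughly like this (the jq filter is copied from the trace; the rpc.py path is an assumption):

scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# prints one opcode=module pair per line, e.g. copy=software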
00:05:50.978 23:47:51 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.978 23:47:51 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:50.978 23:47:51 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:50.978 23:47:51 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:50.978 23:47:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.978 23:47:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.978 ************************************ 00:05:50.978 START TEST accel_missing_filename 00:05:50.978 ************************************ 00:05:50.978 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:50.978 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:50.978 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:50.978 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:50.978 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.978 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:50.978 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.978 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:50.978 23:47:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:50.978 23:47:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:50.978 23:47:51 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.978 23:47:51 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.978 23:47:51 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.978 23:47:51 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.978 23:47:51 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.978 23:47:51 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:50.978 23:47:51 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:51.238 [2024-05-14 23:47:51.581866] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:51.238 [2024-05-14 23:47:51.581924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411571 ] 00:05:51.238 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.238 [2024-05-14 23:47:51.655718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.238 [2024-05-14 23:47:51.728054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.238 [2024-05-14 23:47:51.769231] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:51.238 [2024-05-14 23:47:51.829079] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:51.498 A filename is required. 
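That abort is the point of the test: accel_missing_filename runs a compress workload without naming an input file, and per the accel_perf usage text printed later in this log, compress/decompress workloads take their uncompressed input via -l. A sketch of the failing call and of the -l form the next test builds on (binary path shortened; the next test, accel_compress_verify, then adds -y, which compress rejects):

accel_perf -t 1 -w compress                          # rejected: "A filename is required."
accel_perf -t 1 -w compress -l test/accel/bib        # -l names the uncompressed input file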
00:05:51.498 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:51.498 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.498 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:51.498 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:51.498 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:51.498 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.498 00:05:51.498 real 0m0.368s 00:05:51.498 user 0m0.265s 00:05:51.498 sys 0m0.143s 00:05:51.498 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.498 23:47:51 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:51.498 ************************************ 00:05:51.498 END TEST accel_missing_filename 00:05:51.498 ************************************ 00:05:51.498 23:47:51 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.498 23:47:51 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:51.498 23:47:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.498 23:47:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.498 ************************************ 00:05:51.498 START TEST accel_compress_verify 00:05:51.498 ************************************ 00:05:51.498 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.498 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:51.498 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.498 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:51.498 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.498 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:51.498 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.498 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.498 23:47:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.498 23:47:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:51.498 23:47:52 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.498 23:47:52 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.498 23:47:52 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.498 23:47:52 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.498 23:47:52 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.498 
23:47:52 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:51.498 23:47:52 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:51.498 [2024-05-14 23:47:52.037377] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:51.498 [2024-05-14 23:47:52.037439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411632 ] 00:05:51.498 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.758 [2024-05-14 23:47:52.110061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.758 [2024-05-14 23:47:52.178088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.758 [2024-05-14 23:47:52.219035] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:51.758 [2024-05-14 23:47:52.279087] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:52.019 00:05:52.019 Compression does not support the verify option, aborting. 00:05:52.019 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:52.019 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.019 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:52.019 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:52.019 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:52.019 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.019 00:05:52.019 real 0m0.362s 00:05:52.019 user 0m0.269s 00:05:52.019 sys 0m0.133s 00:05:52.019 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.019 23:47:52 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:52.019 ************************************ 00:05:52.019 END TEST accel_compress_verify 00:05:52.019 ************************************ 00:05:52.019 23:47:52 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:52.019 23:47:52 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:52.019 23:47:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.019 23:47:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.019 ************************************ 00:05:52.019 START TEST accel_wrong_workload 00:05:52.019 ************************************ 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:52.019 23:47:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:52.019 23:47:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:52.019 23:47:52 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.019 23:47:52 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.019 23:47:52 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.019 23:47:52 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.019 23:47:52 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.019 23:47:52 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:52.019 23:47:52 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:52.019 Unsupported workload type: foobar 00:05:52.019 [2024-05-14 23:47:52.481388] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:52.019 accel_perf options: 00:05:52.019 [-h help message] 00:05:52.019 [-q queue depth per core] 00:05:52.019 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:52.019 [-T number of threads per core 00:05:52.019 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:52.019 [-t time in seconds] 00:05:52.019 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:52.019 [ dif_verify, , dif_generate, dif_generate_copy 00:05:52.019 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:52.019 [-l for compress/decompress workloads, name of uncompressed input file 00:05:52.019 [-S for crc32c workload, use this seed value (default 0) 00:05:52.019 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:52.019 [-f for fill workload, use this BYTE value (default 255) 00:05:52.019 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:52.019 [-y verify result if this switch is on] 00:05:52.019 [-a tasks to allocate per core (default: same value as -q)] 00:05:52.019 Can be used to spread operations across a wider range of memory. 
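The usage text above appears because the harness deliberately passed an unsupported workload (-w foobar). For contrast, the first positive accel case later in this log drives the same binary with a supported workload and the flags documented above:

accel_perf -t 1 -w crc32c -S 32 -y    # crc32c for 1 second, seed 32, verify results (the accel_crc32c test below)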
00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.019 00:05:52.019 real 0m0.037s 00:05:52.019 user 0m0.017s 00:05:52.019 sys 0m0.019s 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.019 23:47:52 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:52.019 ************************************ 00:05:52.019 END TEST accel_wrong_workload 00:05:52.019 ************************************ 00:05:52.019 Error: writing output failed: Broken pipe 00:05:52.019 23:47:52 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:52.019 23:47:52 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:52.019 23:47:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.019 23:47:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.019 ************************************ 00:05:52.019 START TEST accel_negative_buffers 00:05:52.019 ************************************ 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:52.019 23:47:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:52.019 23:47:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:52.019 23:47:52 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.019 23:47:52 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.019 23:47:52 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.019 23:47:52 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.019 23:47:52 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.019 23:47:52 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:52.019 23:47:52 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:52.019 -x option must be non-negative. 
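Here the bad input is -x -1: per the option list repeated just below, -x sets the number of xor source buffers and the documented minimum is 2, so the smallest accepted xor run would be something like:

accel_perf -t 1 -w xor -y -x 2    # two source buffers, the documented minimum for xor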
00:05:52.019 [2024-05-14 23:47:52.602736] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:52.019 accel_perf options: 00:05:52.019 [-h help message] 00:05:52.019 [-q queue depth per core] 00:05:52.019 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:52.019 [-T number of threads per core 00:05:52.019 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:52.019 [-t time in seconds] 00:05:52.019 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:52.019 [ dif_verify, , dif_generate, dif_generate_copy 00:05:52.019 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:52.019 [-l for compress/decompress workloads, name of uncompressed input file 00:05:52.019 [-S for crc32c workload, use this seed value (default 0) 00:05:52.019 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:52.019 [-f for fill workload, use this BYTE value (default 255) 00:05:52.019 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:52.019 [-y verify result if this switch is on] 00:05:52.019 [-a tasks to allocate per core (default: same value as -q)] 00:05:52.019 Can be used to spread operations across a wider range of memory. 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.019 00:05:52.019 real 0m0.037s 00:05:52.019 user 0m0.014s 00:05:52.019 sys 0m0.022s 00:05:52.019 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.020 23:47:52 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:52.020 ************************************ 00:05:52.020 END TEST accel_negative_buffers 00:05:52.020 ************************************ 00:05:52.280 Error: writing output failed: Broken pipe 00:05:52.280 23:47:52 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:52.280 23:47:52 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:52.280 23:47:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.280 23:47:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.280 ************************************ 00:05:52.280 START TEST accel_crc32c 00:05:52.280 ************************************ 00:05:52.280 23:47:52 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:52.280 23:47:52 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:52.280 [2024-05-14 23:47:52.725394] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:52.280 [2024-05-14 23:47:52.725451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411721 ] 00:05:52.280 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.280 [2024-05-14 23:47:52.797245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.280 [2024-05-14 23:47:52.870288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.540 23:47:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.535 23:47:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.535 23:47:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.535 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.536 23:47:54 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:53.536 23:47:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.536 00:05:53.536 real 0m1.369s 00:05:53.536 user 0m1.245s 00:05:53.536 sys 0m0.130s 00:05:53.536 23:47:54 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.536 23:47:54 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:53.536 ************************************ 00:05:53.536 END TEST accel_crc32c 00:05:53.536 ************************************ 00:05:53.536 23:47:54 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:53.536 23:47:54 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:53.536 23:47:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.536 23:47:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.795 ************************************ 00:05:53.795 START TEST accel_crc32c_C2 00:05:53.795 ************************************ 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.795 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:53.796 [2024-05-14 23:47:54.159783] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:53.796 [2024-05-14 23:47:54.159833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411990 ] 00:05:53.796 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.796 [2024-05-14 23:47:54.227276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.796 [2024-05-14 23:47:54.295833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.796 23:47:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.175 23:47:55 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.175 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.176 00:05:55.176 real 0m1.343s 00:05:55.176 user 0m1.225s 00:05:55.176 sys 0m0.123s 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.176 23:47:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:55.176 ************************************ 00:05:55.176 END TEST accel_crc32c_C2 00:05:55.176 ************************************ 00:05:55.176 23:47:55 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:55.176 23:47:55 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:55.176 23:47:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.176 23:47:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.176 ************************************ 00:05:55.176 START TEST accel_copy 00:05:55.176 ************************************ 00:05:55.176 23:47:55 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:55.176 23:47:55 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:55.176 [2024-05-14 23:47:55.579086] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:55.176 [2024-05-14 23:47:55.579123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412270 ] 00:05:55.176 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.176 [2024-05-14 23:47:55.646726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.176 [2024-05-14 23:47:55.714425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.176 23:47:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
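The long runs of "case "$var" in / IFS=: / read -r var val / val=..." entries above (and in every other accel sub-test in this log) are the xtrace of accel.sh consuming the settings echoed for each accel_perf run. A minimal bash sketch of that pattern, reconstructed from the trace only (the key names and the input source are assumptions, not taken from the real script):

  # Sketch reconstructed from the trace above; not the actual accel.sh code.
  # Each record is split on ':' (IFS=:) into a key ($var) and a value ($val).
  while IFS=: read -r var val; do
    case "$var" in
      *opc*)    accel_opc=$val    ;;  # trace shows e.g. accel_opc=crc32c, accel_opc=copy
      *module*) accel_module=$val ;;  # trace shows accel_module=software in every run here
      *)        : ;;                  # remaining values: '4096 bytes', 32, 1, '1 seconds', Yes, ...
    esac
  done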
00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:56.556 23:47:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.556 00:05:56.556 real 0m1.343s 00:05:56.556 user 0m1.228s 00:05:56.556 sys 0m0.119s 00:05:56.556 23:47:56 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.556 23:47:56 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:56.556 ************************************ 00:05:56.556 END TEST accel_copy 00:05:56.556 ************************************ 00:05:56.556 23:47:56 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.556 23:47:56 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:56.556 23:47:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.556 23:47:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.556 ************************************ 00:05:56.556 START TEST accel_fill 00:05:56.556 ************************************ 00:05:56.556 23:47:56 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.556 23:47:56 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:56.556 23:47:56 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:56.556 [2024-05-14 23:47:57.009275] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:56.556 [2024-05-14 23:47:57.009333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412555 ] 00:05:56.556 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.556 [2024-05-14 23:47:57.077651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.556 [2024-05-14 23:47:57.145587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.816 23:47:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:57.754 23:47:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.754 00:05:57.754 real 0m1.356s 00:05:57.754 user 0m1.227s 00:05:57.754 sys 0m0.133s 00:05:57.754 23:47:58 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.754 23:47:58 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 ************************************ 00:05:57.754 END TEST accel_fill 00:05:57.754 ************************************ 00:05:58.014 23:47:58 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:58.014 23:47:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:58.014 23:47:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.014 23:47:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.014 ************************************ 00:05:58.014 START TEST accel_copy_crc32c 00:05:58.014 ************************************ 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:58.014 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
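The accel_perf command line echoed just above for the copy_crc32c case can be replayed by hand outside the harness. A sketch follows: the binary path and the -t/-w/-y flags are copied from the trace, while dropping '-c /dev/fd/62' is an assumption -- the harness only uses that descriptor to pass in the JSON config assembled by build_accel_config, which is empty in these runs.

  # Hedged reproduction of the run traced above, from the workspace root seen in the log.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1 appears to correspond to the '1 seconds' value echoed in the trace, -w selects
  # the workload (copy_crc32c here), and -y to the Yes value (presumably verification).
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y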
00:05:58.014 [2024-05-14 23:47:58.449854] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:58.014 [2024-05-14 23:47:58.449930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412841 ] 00:05:58.014 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.014 [2024-05-14 23:47:58.520385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.014 [2024-05-14 23:47:58.588496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.273 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.273 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.273 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.273 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.273 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.273 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.273 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.273 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.273 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:58.273 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.274 23:47:58 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.274 23:47:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.210 00:05:59.210 real 0m1.360s 00:05:59.210 user 0m1.237s 00:05:59.210 sys 0m0.128s 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.210 23:47:59 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:59.210 ************************************ 00:05:59.210 END TEST accel_copy_crc32c 00:05:59.210 ************************************ 00:05:59.469 23:47:59 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:59.469 23:47:59 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:59.469 23:47:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.469 23:47:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.469 ************************************ 00:05:59.469 START TEST accel_copy_crc32c_C2 00:05:59.469 ************************************ 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:59.469 23:47:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:59.469 [2024-05-14 23:47:59.889150] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:05:59.469 [2024-05-14 23:47:59.889235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413127 ] 00:05:59.469 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.469 [2024-05-14 23:47:59.959157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.469 [2024-05-14 23:48:00.033943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.727 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:59.728 23:48:00 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.728 23:48:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.664 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.665 00:06:00.665 real 0m1.369s 00:06:00.665 user 0m1.238s 00:06:00.665 sys 0m0.134s 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.665 23:48:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:00.665 ************************************ 00:06:00.665 END TEST accel_copy_crc32c_C2 00:06:00.665 ************************************ 00:06:00.924 23:48:01 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:00.924 23:48:01 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:00.924 23:48:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.924 23:48:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.924 ************************************ 00:06:00.924 START TEST accel_dualcast 00:06:00.924 ************************************ 00:06:00.924 23:48:01 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:00.924 [2024-05-14 23:48:01.334329] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
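Each accel sub-test above finishes with the same three bracket tests, visible in expanded form in the trace as "[[ -n software ]]", "[[ -n crc32c ]]" / "[[ -n copy_crc32c ]]" and "[[ software == \s\o\f\t\w\a\r\e ]]". In variable form they amount to the sketch below; the variable names are assumptions, since the trace only shows the already-expanded values:

  # What the three checks at the end of every accel sub-test in this log assert:
  [[ -n "$accel_module" ]]           # some accel module was selected      -> "software" in every run here
  [[ -n "$accel_opc"    ]]           # the workload opcode was parsed      -> crc32c, copy, fill, dualcast, ...
  [[ "$accel_module" == software ]]  # software path expected, consistent with the empty
                                     # accel_json_cfg=() seen in build_accel_config above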
00:06:00.924 [2024-05-14 23:48:01.334387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413503 ] 00:06:00.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.924 [2024-05-14 23:48:01.405403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.924 [2024-05-14 23:48:01.470333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.924 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.925 
23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.925 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.184 23:48:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.121 23:48:02 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:02.121 23:48:02 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.121 00:06:02.121 real 0m1.358s 00:06:02.121 user 0m1.237s 00:06:02.121 sys 0m0.125s 00:06:02.121 23:48:02 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.121 23:48:02 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:02.121 ************************************ 00:06:02.121 END TEST accel_dualcast 00:06:02.121 ************************************ 00:06:02.121 23:48:02 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:02.121 23:48:02 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:02.121 23:48:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.121 23:48:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.381 ************************************ 00:06:02.381 START TEST accel_compare 00:06:02.381 ************************************ 00:06:02.381 23:48:02 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:02.381 [2024-05-14 23:48:02.774703] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:06:02.381 [2024-05-14 23:48:02.774762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413812 ] 00:06:02.381 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.381 [2024-05-14 23:48:02.845380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.381 [2024-05-14 23:48:02.916664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.381 23:48:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.760 23:48:04 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:03.760 23:48:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.760 00:06:03.760 real 0m1.366s 00:06:03.760 user 0m1.235s 00:06:03.760 sys 0m0.135s 00:06:03.760 23:48:04 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.760 23:48:04 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:03.760 ************************************ 00:06:03.760 END TEST accel_compare 00:06:03.760 ************************************ 00:06:03.760 23:48:04 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:03.760 23:48:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:03.760 23:48:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.760 23:48:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.760 ************************************ 00:06:03.760 START TEST accel_xor 00:06:03.760 ************************************ 00:06:03.760 23:48:04 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:03.760 23:48:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:03.760 [2024-05-14 23:48:04.219669] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:06:03.760 [2024-05-14 23:48:04.219746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414156 ] 00:06:03.760 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.760 [2024-05-14 23:48:04.289343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.019 [2024-05-14 23:48:04.359954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.019 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.019 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.020 23:48:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.398 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.398 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.398 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.398 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.398 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 
23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.399 00:06:05.399 real 0m1.366s 00:06:05.399 user 0m1.245s 00:06:05.399 sys 0m0.123s 00:06:05.399 23:48:05 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.399 23:48:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:05.399 ************************************ 00:06:05.399 END TEST accel_xor 00:06:05.399 ************************************ 00:06:05.399 23:48:05 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:05.399 23:48:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:05.399 23:48:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.399 23:48:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.399 ************************************ 00:06:05.399 START TEST accel_xor 00:06:05.399 ************************************ 00:06:05.399 23:48:05 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:05.399 [2024-05-14 23:48:05.661662] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
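The second accel_xor pass launched just above differs from the first only in the extra -x 3 argument (three xor source buffers instead of the two used in the previous run, matching the val=3 versus val=2 read in the respective traces). A sketch of the equivalent manual invocation, under the same assumptions as the dualcast example earlier:

    # Hypothetical re-run of the 3-source xor case; flags copied from the command line traced above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3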
00:06:05.399 [2024-05-14 23:48:05.661724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414753 ] 00:06:05.399 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.399 [2024-05-14 23:48:05.731626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.399 [2024-05-14 23:48:05.802980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.399 23:48:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.778 
23:48:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:06.778 23:48:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.778 00:06:06.778 real 0m1.363s 00:06:06.778 user 0m1.246s 00:06:06.778 sys 0m0.121s 00:06:06.778 23:48:06 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.778 23:48:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:06.778 ************************************ 00:06:06.778 END TEST accel_xor 00:06:06.778 ************************************ 00:06:06.778 23:48:07 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:06.778 23:48:07 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:06.778 23:48:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.778 23:48:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.778 ************************************ 00:06:06.778 START TEST accel_dif_verify 00:06:06.778 ************************************ 00:06:06.778 23:48:07 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:06.778 23:48:07 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:06.778 23:48:07 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:06.778 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.778 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.778 23:48:07 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:06.779 [2024-05-14 23:48:07.096692] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
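The dif_verify case launched just above drops the -y flag and instead exercises DIF metadata: the trace that follows reads two '4096 bytes' buffers plus '512 bytes' and '8 bytes' values for the DIF layout (their exact mapping onto accel_perf options is not visible in this trace, so only the flags actually logged are repeated below). A sketch of the equivalent manual invocation, same assumptions as above:

    # Hypothetical re-run of the dif_verify case; command line copied from the trace above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify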
00:06:06.779 [2024-05-14 23:48:07.096735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415107 ] 00:06:06.779 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.779 [2024-05-14 23:48:07.163528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.779 [2024-05-14 23:48:07.231659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 
23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.779 23:48:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.158 
23:48:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:08.158 23:48:08 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.158 00:06:08.158 real 0m1.344s 00:06:08.158 user 0m1.234s 00:06:08.158 sys 0m0.115s 00:06:08.158 23:48:08 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.158 23:48:08 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:08.158 ************************************ 00:06:08.158 END TEST accel_dif_verify 00:06:08.158 ************************************ 00:06:08.158 23:48:08 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:08.158 23:48:08 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:08.158 23:48:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.158 23:48:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.158 ************************************ 00:06:08.158 START TEST accel_dif_generate 00:06:08.158 ************************************ 00:06:08.158 23:48:08 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:08.158 23:48:08 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:08.158 23:48:08 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:08.158 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.158 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.158 
23:48:08 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:08.158 23:48:08 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:08.158 23:48:08 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:08.159 [2024-05-14 23:48:08.532420] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:08.159 [2024-05-14 23:48:08.532476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415333 ] 00:06:08.159 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.159 [2024-05-14 23:48:08.601925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.159 [2024-05-14 23:48:08.671675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.159 23:48:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:09.549 23:48:09 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.549 00:06:09.549 real 0m1.359s 00:06:09.549 user 0m1.229s 00:06:09.549 sys 
0m0.136s 00:06:09.549 23:48:09 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.549 23:48:09 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:09.549 ************************************ 00:06:09.549 END TEST accel_dif_generate 00:06:09.549 ************************************ 00:06:09.549 23:48:09 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:09.549 23:48:09 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:09.549 23:48:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.549 23:48:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.549 ************************************ 00:06:09.549 START TEST accel_dif_generate_copy 00:06:09.549 ************************************ 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:09.549 23:48:09 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:09.549 [2024-05-14 23:48:09.976839] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
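For reference, the workload that accel.sh is wrapping here can also be launched by hand outside the harness. A minimal sketch, assuming a local SPDK checkout built in ./spdk; dropping the -c /dev/fd/62 config descriptor (which the harness uses to inject build_accel_config's JSON) is an assumption that no hardware accel module needs configuring:

# Run the software dif_generate_copy workload for 1 second, mirroring the
# "-t 1 -w dif_generate_copy" flags logged above.
cd spdk
./build/examples/accel_perf -t 1 -w dif_generate_copy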
00:06:09.549 [2024-05-14 23:48:09.976900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415581 ] 00:06:09.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.549 [2024-05-14 23:48:10.049731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.549 [2024-05-14 23:48:10.128007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.809 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.810 23:48:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
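Each accel_* block in this log ends with a real/user/sys summary (accel_dif_generate reported real 0m1.359s above; accel_dif_generate_copy reports its own a few entries below). A hypothetical post-processing one-liner for pulling those wall-clock figures out of a saved copy of this console output; the file name console.log is an assumption, not part of the job:

# Print one "real XmY.YYYs" figure per completed accel test, in order of appearance.
grep -oE 'real [0-9]+m[0-9.]+s' console.log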
00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.749 00:06:10.749 real 0m1.375s 00:06:10.749 user 0m1.241s 00:06:10.749 sys 0m0.138s 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.749 23:48:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:10.749 ************************************ 00:06:10.749 END TEST accel_dif_generate_copy 00:06:10.749 ************************************ 00:06:11.009 23:48:11 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:11.009 23:48:11 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.009 23:48:11 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:11.009 23:48:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.009 23:48:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.009 ************************************ 00:06:11.009 START TEST accel_comp 00:06:11.009 ************************************ 00:06:11.009 23:48:11 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:11.009 23:48:11 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:11.009 [2024-05-14 23:48:11.434142] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:11.009 [2024-05-14 23:48:11.434228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415829 ] 00:06:11.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.009 [2024-05-14 23:48:11.504528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.009 [2024-05-14 23:48:11.573872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.269 
23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.269 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.270 23:48:11 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.270 23:48:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:12.206 23:48:12 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.206 00:06:12.206 real 0m1.365s 00:06:12.206 user 0m1.241s 00:06:12.206 sys 0m0.129s 00:06:12.206 23:48:12 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.206 23:48:12 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:12.206 ************************************ 00:06:12.206 END TEST accel_comp 00:06:12.206 ************************************ 00:06:12.466 23:48:12 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.466 23:48:12 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:12.466 23:48:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.466 23:48:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.466 ************************************ 00:06:12.466 START TEST accel_decomp 00:06:12.466 ************************************ 00:06:12.466 23:48:12 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.466 23:48:12 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:12.466 23:48:12 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:12.466 23:48:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.466 23:48:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.467 23:48:12 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.467 23:48:12 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.467 23:48:12 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:12.467 23:48:12 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.467 23:48:12 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.467 23:48:12 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.467 23:48:12 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.467 23:48:12 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.467 23:48:12 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:12.467 23:48:12 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:12.467 [2024-05-14 23:48:12.881622] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:12.467 [2024-05-14 23:48:12.881681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416054 ] 00:06:12.467 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.467 [2024-05-14 23:48:12.950602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.467 [2024-05-14 23:48:13.019872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.727 23:48:13 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.727 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.728 23:48:13 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.728 23:48:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:13.667 23:48:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.667 00:06:13.667 real 0m1.362s 00:06:13.667 user 0m1.245s 00:06:13.667 sys 0m0.121s 00:06:13.667 23:48:14 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.667 23:48:14 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:13.667 ************************************ 00:06:13.667 END TEST accel_decomp 00:06:13.667 ************************************ 00:06:13.667 
23:48:14 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:13.667 23:48:14 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:13.667 23:48:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.667 23:48:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.928 ************************************ 00:06:13.928 START TEST accel_decmop_full 00:06:13.928 ************************************ 00:06:13.928 23:48:14 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:13.928 [2024-05-14 23:48:14.316631] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
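The accel_decmop_full case starting here drives the same decompress workload as accel_decomp but adds -o 0, and the harness echoes a 111250-byte buffer below instead of the 4096 bytes used by the plain case. A minimal standalone sketch built from the flags logged above, assuming a local checkout with test/accel/bib present and again omitting the config descriptor:

# Software decompress of the bundled bib input for 1 second;
# -l, -y and -o 0 are copied verbatim from the command logged above.
./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0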
00:06:13.928 [2024-05-14 23:48:14.316688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416293 ] 00:06:13.928 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.928 [2024-05-14 23:48:14.385938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.928 [2024-05-14 23:48:14.455081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.928 23:48:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:15.308 23:48:15 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.308 00:06:15.308 real 0m1.371s 00:06:15.308 user 0m1.239s 00:06:15.308 sys 0m0.136s 00:06:15.308 23:48:15 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.308 23:48:15 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:15.308 ************************************ 00:06:15.308 END TEST accel_decmop_full 00:06:15.308 ************************************ 00:06:15.308 23:48:15 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.308 23:48:15 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:15.308 23:48:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.308 23:48:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.308 ************************************ 00:06:15.308 START TEST accel_decomp_mcore 00:06:15.308 ************************************ 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:15.308 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:15.308 [2024-05-14 23:48:15.755624] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:15.308 [2024-05-14 23:48:15.755679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416568 ] 00:06:15.308 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.308 [2024-05-14 23:48:15.824535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.308 [2024-05-14 23:48:15.896978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.308 [2024-05-14 23:48:15.897072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.308 [2024-05-14 23:48:15.897156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.308 [2024-05-14 23:48:15.897158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.568 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.569 23:48:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.507 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.507 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.507 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.507 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.507 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.507 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.507 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.507 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.507 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
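The mcore variant running here differs from the single-core runs only in the core mask: the command logged above adds -m 0xf, which is why four reactors come up on cores 0-3 ("Total cores available: 4"). A sketch of the same run restricted to two cores, assuming the same local layout as the earlier sketches; the 0x3 mask is an illustrative choice, not taken from this job:

# 0x3 selects cores 0 and 1; the CI job uses 0xf (cores 0-3).
./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0x3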
00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.768 00:06:16.768 real 0m1.367s 00:06:16.768 user 0m4.576s 00:06:16.768 sys 0m0.133s 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.768 23:48:17 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:16.768 ************************************ 00:06:16.768 END TEST accel_decomp_mcore 00:06:16.768 ************************************ 00:06:16.768 23:48:17 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.768 23:48:17 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:16.768 23:48:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.768 23:48:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.768 ************************************ 00:06:16.768 START TEST accel_decomp_full_mcore 00:06:16.768 ************************************ 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:16.768 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:16.768 [2024-05-14 23:48:17.227918] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:16.768 [2024-05-14 23:48:17.227981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416850 ] 00:06:16.768 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.768 [2024-05-14 23:48:17.298463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.028 [2024-05-14 23:48:17.372937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.028 [2024-05-14 23:48:17.373032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.028 [2024-05-14 23:48:17.373116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.028 [2024-05-14 23:48:17.373118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.028 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.029 23:48:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.410 00:06:18.410 real 0m1.393s 00:06:18.410 user 0m4.616s 00:06:18.410 sys 0m0.139s 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.410 23:48:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:18.410 ************************************ 00:06:18.410 END TEST accel_decomp_full_mcore 00:06:18.410 ************************************ 00:06:18.410 23:48:18 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.410 23:48:18 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:18.410 23:48:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.410 23:48:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.410 ************************************ 00:06:18.410 START TEST accel_decomp_mthread 00:06:18.410 ************************************ 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
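The two multi-core runs above boil down to a pair of accel_perf invocations; a minimal sketch, assuming the SPDK checkout path shown in the trace and omitting the -c /dev/fd/62 JSON config the harness feeds in:
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# accel_decomp_mcore: decompress test/accel/bib on cores 0-3 (-m 0xf); the trace records '4096 bytes' per transfer
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf
# accel_decomp_full_mcore: same workload with -o 0, which the trace records as '111250 bytes' instead of '4096 bytes'
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf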
00:06:18.410 [2024-05-14 23:48:18.714959] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:18.410 [2024-05-14 23:48:18.715021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417140 ] 00:06:18.410 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.410 [2024-05-14 23:48:18.785860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.410 [2024-05-14 23:48:18.856678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.410 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.411 23:48:18 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.791 00:06:19.791 real 0m1.379s 00:06:19.791 user 0m1.259s 00:06:19.791 sys 0m0.136s 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.791 23:48:20 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 ************************************ 00:06:19.791 END TEST accel_decomp_mthread 00:06:19.791 ************************************ 00:06:19.791 23:48:20 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.791 23:48:20 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:19.791 23:48:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.791 23:48:20 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 ************************************ 00:06:19.791 START TEST accel_decomp_full_mthread 00:06:19.791 ************************************ 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:19.791 [2024-05-14 23:48:20.168070] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
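Each of these decompress suites is declared passed by the same three checks at accel/accel.sh@27 visible in the trace; a minimal sketch of that condition, with the wrapper function name purely illustrative:
check_decomp_result() {
    # values parsed out of the xtrace above ("accel_module=software", "accel_opc=decompress")
    [[ -n "$accel_module" ]] || return 1
    [[ -n "$accel_opc" ]] || return 1
    # the run only counts if the software module ended up servicing the opcode
    [[ "$accel_module" == software ]]
}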
00:06:19.791 [2024-05-14 23:48:20.168113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417425 ] 00:06:19.791 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.791 [2024-05-14 23:48:20.238607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.791 [2024-05-14 23:48:20.308581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.791 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.792 23:48:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.174 00:06:21.174 real 0m1.377s 00:06:21.174 user 0m1.263s 00:06:21.174 sys 0m0.127s 00:06:21.174 23:48:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.175 23:48:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:21.175 ************************************ 00:06:21.175 END TEST accel_decomp_full_mthread 00:06:21.175 
************************************ 00:06:21.175 23:48:21 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:21.175 23:48:21 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:21.175 23:48:21 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:21.175 23:48:21 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:21.175 23:48:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.175 23:48:21 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.175 23:48:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.175 23:48:21 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.175 23:48:21 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.175 23:48:21 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.175 23:48:21 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.175 23:48:21 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:21.175 23:48:21 accel -- accel/accel.sh@41 -- # jq -r . 00:06:21.175 ************************************ 00:06:21.175 START TEST accel_dif_functional_tests 00:06:21.175 ************************************ 00:06:21.175 23:48:21 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:21.175 [2024-05-14 23:48:21.662765] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:21.175 [2024-05-14 23:48:21.662803] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417717 ] 00:06:21.175 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.175 [2024-05-14 23:48:21.728987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.434 [2024-05-14 23:48:21.799882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.434 [2024-05-14 23:48:21.799977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.434 [2024-05-14 23:48:21.799979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.434 00:06:21.434 00:06:21.434 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.434 http://cunit.sourceforge.net/ 00:06:21.434 00:06:21.434 00:06:21.434 Suite: accel_dif 00:06:21.434 Test: verify: DIF generated, GUARD check ...passed 00:06:21.434 Test: verify: DIF generated, APPTAG check ...passed 00:06:21.434 Test: verify: DIF generated, REFTAG check ...passed 00:06:21.434 Test: verify: DIF not generated, GUARD check ...[2024-05-14 23:48:21.868378] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:21.434 [2024-05-14 23:48:21.868425] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:21.434 passed 00:06:21.434 Test: verify: DIF not generated, APPTAG check ...[2024-05-14 23:48:21.868455] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:21.434 [2024-05-14 23:48:21.868476] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:21.434 passed 00:06:21.434 Test: verify: DIF not generated, REFTAG check ...[2024-05-14 23:48:21.868496] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:21.434 [2024-05-14 
23:48:21.868514] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:21.434 passed 00:06:21.434 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:21.434 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-14 23:48:21.868573] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:21.434 passed 00:06:21.434 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:21.434 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:21.434 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:21.434 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-14 23:48:21.868676] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:21.434 passed 00:06:21.434 Test: generate copy: DIF generated, GUARD check ...passed 00:06:21.434 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:21.434 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:21.434 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:21.434 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:21.435 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:21.435 Test: generate copy: iovecs-len validate ...[2024-05-14 23:48:21.868844] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:21.435 passed 00:06:21.435 Test: generate copy: buffer alignment validate ...passed 00:06:21.435 00:06:21.435 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.435 suites 1 1 n/a 0 0 00:06:21.435 tests 20 20 20 0 0 00:06:21.435 asserts 204 204 204 0 n/a 00:06:21.435 00:06:21.435 Elapsed time = 0.000 seconds 00:06:21.694 00:06:21.694 real 0m0.440s 00:06:21.694 user 0m0.614s 00:06:21.694 sys 0m0.142s 00:06:21.694 23:48:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.694 23:48:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:21.694 ************************************ 00:06:21.694 END TEST accel_dif_functional_tests 00:06:21.694 ************************************ 00:06:21.694 00:06:21.694 real 0m32.107s 00:06:21.694 user 0m34.989s 00:06:21.694 sys 0m4.938s 00:06:21.694 23:48:22 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.694 23:48:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.694 ************************************ 00:06:21.694 END TEST accel 00:06:21.694 ************************************ 00:06:21.694 23:48:22 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:21.694 23:48:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.694 23:48:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.694 23:48:22 -- common/autotest_common.sh@10 -- # set +x 00:06:21.694 ************************************ 00:06:21.694 START TEST accel_rpc 00:06:21.694 ************************************ 00:06:21.694 23:48:22 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:21.954 * Looking for test storage... 
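The DIF suite that just reported its CUnit summary is a standalone binary rather than an accel_perf run; a minimal sketch of the invocation, assuming the accel JSON config assembled by build_accel_config is supplied on fd 62 as the trace shows (how that descriptor is wired up is not visible here):
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# runs the verify / generate-copy DIF cases (GUARD, APPTAG, REFTAG) summarized above
$SPDK/test/accel/dif/dif -c /dev/fd/62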
00:06:21.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:21.954 23:48:22 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:21.954 23:48:22 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3417926 00:06:21.954 23:48:22 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3417926 00:06:21.954 23:48:22 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:21.954 23:48:22 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3417926 ']' 00:06:21.954 23:48:22 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.954 23:48:22 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.954 23:48:22 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.954 23:48:22 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.954 23:48:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.954 [2024-05-14 23:48:22.350084] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:21.954 [2024-05-14 23:48:22.350141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417926 ] 00:06:21.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.954 [2024-05-14 23:48:22.420577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.954 [2024-05-14 23:48:22.492957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.893 23:48:23 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.893 23:48:23 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:22.893 23:48:23 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:22.893 23:48:23 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:22.893 23:48:23 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:22.893 23:48:23 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:22.893 23:48:23 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:22.893 23:48:23 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:22.893 23:48:23 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.893 23:48:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.893 ************************************ 00:06:22.893 START TEST accel_assign_opcode 00:06:22.893 ************************************ 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:22.893 [2024-05-14 23:48:23.183008] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:22.893 [2024-05-14 23:48:23.191022] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.893 software 00:06:22.893 00:06:22.893 real 0m0.232s 00:06:22.893 user 0m0.041s 00:06:22.893 sys 0m0.011s 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.893 23:48:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:22.893 ************************************ 00:06:22.893 END TEST accel_assign_opcode 00:06:22.893 ************************************ 00:06:22.893 23:48:23 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3417926 00:06:22.893 23:48:23 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3417926 ']' 00:06:22.893 23:48:23 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3417926 00:06:22.893 23:48:23 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:22.893 23:48:23 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:22.893 23:48:23 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3417926 00:06:23.153 23:48:23 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.153 23:48:23 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.153 23:48:23 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3417926' 00:06:23.153 killing process with pid 3417926 00:06:23.153 23:48:23 accel_rpc -- common/autotest_common.sh@965 -- # kill 3417926 00:06:23.153 23:48:23 accel_rpc -- common/autotest_common.sh@970 -- # wait 3417926 00:06:23.487 00:06:23.487 real 0m1.647s 00:06:23.487 user 0m1.681s 00:06:23.487 sys 0m0.480s 00:06:23.487 23:48:23 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.487 23:48:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.487 ************************************ 00:06:23.487 END TEST accel_rpc 00:06:23.487 ************************************ 00:06:23.487 23:48:23 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:23.487 23:48:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.487 23:48:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.487 23:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:23.487 ************************************ 00:06:23.487 START TEST app_cmdline 00:06:23.487 ************************************ 00:06:23.487 23:48:23 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:23.487 * Looking for test storage... 00:06:23.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:23.487 23:48:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:23.487 23:48:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3418362 00:06:23.487 23:48:24 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:23.487 23:48:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3418362 00:06:23.487 23:48:24 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3418362 ']' 00:06:23.487 23:48:24 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.487 23:48:24 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.487 23:48:24 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.487 23:48:24 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.487 23:48:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.487 [2024-05-14 23:48:24.071623] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
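The accel_rpc suite that just finished drives a bare spdk_tgt over its RPC socket; a minimal sketch of the same sequence, assuming the default /var/tmp/spdk.sock socket from the waitforlisten message and using scripts/rpc.py, which rpc_cmd wraps:
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt --wait-for-rpc &                     # target stays in pre-init until framework_start_init
# wait for /var/tmp/spdk.sock to appear (the harness does this with waitforlisten)
$SPDK/scripts/rpc.py accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
$SPDK/scripts/rpc.py framework_start_init                     # finish init; the assignment is applied now
$SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expected output: software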
00:06:23.487 [2024-05-14 23:48:24.071674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418362 ] 00:06:23.746 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.746 [2024-05-14 23:48:24.140847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.746 [2024-05-14 23:48:24.215720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.315 23:48:24 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.315 23:48:24 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:24.315 23:48:24 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:24.575 { 00:06:24.575 "version": "SPDK v24.05-pre git sha1 52939f252", 00:06:24.575 "fields": { 00:06:24.575 "major": 24, 00:06:24.575 "minor": 5, 00:06:24.575 "patch": 0, 00:06:24.575 "suffix": "-pre", 00:06:24.575 "commit": "52939f252" 00:06:24.575 } 00:06:24.575 } 00:06:24.575 23:48:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:24.575 23:48:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:24.575 23:48:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:24.575 23:48:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:24.575 23:48:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.575 23:48:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:24.575 23:48:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.575 23:48:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:24.575 23:48:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:24.575 23:48:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:24.575 23:48:25 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:24.575 23:48:25 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.834 request: 00:06:24.834 { 00:06:24.834 "method": "env_dpdk_get_mem_stats", 00:06:24.834 "req_id": 1 00:06:24.835 } 00:06:24.835 Got JSON-RPC error response 00:06:24.835 response: 00:06:24.835 { 00:06:24.835 "code": -32601, 00:06:24.835 "message": "Method not found" 00:06:24.835 } 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.835 23:48:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3418362 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3418362 ']' 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3418362 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3418362 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3418362' 00:06:24.835 killing process with pid 3418362 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@965 -- # kill 3418362 00:06:24.835 23:48:25 app_cmdline -- common/autotest_common.sh@970 -- # wait 3418362 00:06:25.094 00:06:25.094 real 0m1.729s 00:06:25.094 user 0m2.014s 00:06:25.094 sys 0m0.488s 00:06:25.094 23:48:25 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.094 23:48:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.094 ************************************ 00:06:25.094 END TEST app_cmdline 00:06:25.094 ************************************ 00:06:25.354 23:48:25 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:25.354 23:48:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.354 23:48:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.354 23:48:25 -- common/autotest_common.sh@10 -- # set +x 00:06:25.354 ************************************ 00:06:25.354 START TEST version 00:06:25.354 ************************************ 00:06:25.354 23:48:25 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:25.354 * Looking for test storage... 
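The app_cmdline result above comes down to the RPC whitelist: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two calls succeed while anything else (here env_dpdk_get_mem_stats) is rejected with JSON-RPC error -32601, "Method not found". A minimal sketch of the same check done by hand, assuming a built SPDK tree and the default /var/tmp/spdk.sock RPC socket (the sleep is a crude stand-in for the test's waitforlisten helper):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 2                                   # stand-in for waitforlisten
  ./scripts/rpc.py spdk_get_version         # allowed: prints the version JSON seen above
  ./scripts/rpc.py rpc_get_methods          # allowed: exactly the two whitelisted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats \
      || echo 'rejected with -32601 as expected'
  kill %1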
00:06:25.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:25.354 23:48:25 version -- app/version.sh@17 -- # get_header_version major 00:06:25.354 23:48:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:25.354 23:48:25 version -- app/version.sh@14 -- # cut -f2 00:06:25.354 23:48:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.354 23:48:25 version -- app/version.sh@17 -- # major=24 00:06:25.354 23:48:25 version -- app/version.sh@18 -- # get_header_version minor 00:06:25.354 23:48:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:25.354 23:48:25 version -- app/version.sh@14 -- # cut -f2 00:06:25.354 23:48:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.354 23:48:25 version -- app/version.sh@18 -- # minor=5 00:06:25.354 23:48:25 version -- app/version.sh@19 -- # get_header_version patch 00:06:25.354 23:48:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:25.354 23:48:25 version -- app/version.sh@14 -- # cut -f2 00:06:25.354 23:48:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.354 23:48:25 version -- app/version.sh@19 -- # patch=0 00:06:25.354 23:48:25 version -- app/version.sh@20 -- # get_header_version suffix 00:06:25.354 23:48:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:25.354 23:48:25 version -- app/version.sh@14 -- # cut -f2 00:06:25.354 23:48:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.354 23:48:25 version -- app/version.sh@20 -- # suffix=-pre 00:06:25.354 23:48:25 version -- app/version.sh@22 -- # version=24.5 00:06:25.354 23:48:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:25.354 23:48:25 version -- app/version.sh@28 -- # version=24.5rc0 00:06:25.354 23:48:25 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:25.354 23:48:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:25.354 23:48:25 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:25.354 23:48:25 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:25.354 00:06:25.354 real 0m0.185s 00:06:25.354 user 0m0.086s 00:06:25.354 sys 0m0.147s 00:06:25.354 23:48:25 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.354 23:48:25 version -- common/autotest_common.sh@10 -- # set +x 00:06:25.354 ************************************ 00:06:25.354 END TEST version 00:06:25.354 ************************************ 00:06:25.615 23:48:25 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:25.615 23:48:25 -- spdk/autotest.sh@194 -- # uname -s 00:06:25.615 23:48:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:25.615 23:48:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:25.615 23:48:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:25.615 23:48:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
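Stripped of its helper functions, the version test above is a short grep/cut/tr pipeline over include/spdk/version.h plus a comparison against the Python bindings. A minimal sketch of the same idea, assuming the macros in the header are tab-separated (cut's default delimiter) and simplifying the "-pre" to "rc0" suffix mapping:

  hdr=include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  [[ $suffix == -pre ]] && version=${version}rc0   # 24.5 with -pre becomes 24.5rc0
  py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
  [[ $py_version == "$version" ]]                  # both must agree, 24.5rc0 in this run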
00:06:25.615 23:48:25 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:25.615 23:48:25 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:25.615 23:48:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.615 23:48:25 -- common/autotest_common.sh@10 -- # set +x 00:06:25.615 23:48:26 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:25.615 23:48:26 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:25.615 23:48:26 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:06:25.615 23:48:26 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:06:25.615 23:48:26 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:06:25.615 23:48:26 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:06:25.615 23:48:26 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:25.615 23:48:26 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:25.615 23:48:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.615 23:48:26 -- common/autotest_common.sh@10 -- # set +x 00:06:25.615 ************************************ 00:06:25.615 START TEST nvmf_tcp 00:06:25.615 ************************************ 00:06:25.615 23:48:26 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:25.615 * Looking for test storage... 00:06:25.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.615 23:48:26 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.615 23:48:26 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.615 23:48:26 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.615 23:48:26 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.615 23:48:26 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.615 23:48:26 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.615 23:48:26 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:25.615 23:48:26 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:25.615 23:48:26 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:25.615 23:48:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:25.615 23:48:26 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:25.615 23:48:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:25.616 23:48:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.616 
23:48:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.876 ************************************ 00:06:25.876 START TEST nvmf_example 00:06:25.876 ************************************ 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:25.876 * Looking for test storage... 00:06:25.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:25.876 23:48:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.449 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:32.450 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:32.450 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:32.450 Found net devices under 
0000:af:00.0: cvl_0_0 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:32.450 Found net devices under 0000:af:00.1: cvl_0_1 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.450 23:48:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.450 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:32.450 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:32.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:06:32.709 00:06:32.709 --- 10.0.0.2 ping statistics --- 00:06:32.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.709 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:32.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:06:32.709 00:06:32.709 --- 10.0.0.1 ping statistics --- 00:06:32.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.709 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3422082 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3422082 00:06:32.709 23:48:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3422082 ']' 00:06:32.710 23:48:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.710 23:48:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.710 23:48:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
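For reference, the nvmf_tcp_init plumbing traced above moves the target-side e810 port (cvl_0_0) into its own network namespace, leaves the initiator-side port (cvl_0_1) in the root namespace, opens TCP port 4420, and ping-checks both directions; this is what lets a single host act as both initiator and target over the physical NICs. Condensed into the underlying commands, with the interface names and 10.0.0.x addresses as assigned by common.sh in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target interface moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                               # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # and back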
00:06:32.710 23:48:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.710 23:48:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:32.710 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:33.657 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:33.658 23:48:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:33.658 EAL: No free 2048 kB hugepages reported on node 1 
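The example target above is wired up with five RPCs and then driven from the initiator side with spdk_nvme_perf. Condensed, and assuming rpc.py is talking to the example app over the default /var/tmp/spdk.sock UNIX-domain socket (which is reachable regardless of the network namespace), the sequence is:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                 # 64 MiB malloc bdev, 512-byte blocks, named Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 10 s of 4 KiB random I/O, 30% reads, queue depth 64, against the new subsystem:
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The IOPS and latency line in the summary that follows is the output of exactly this perf invocation.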
00:06:45.870 Initializing NVMe Controllers 00:06:45.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:45.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:45.870 Initialization complete. Launching workers. 00:06:45.870 ======================================================== 00:06:45.870 Latency(us) 00:06:45.870 Device Information : IOPS MiB/s Average min max 00:06:45.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14339.12 56.01 4464.06 689.36 16391.14 00:06:45.870 ======================================================== 00:06:45.870 Total : 14339.12 56.01 4464.06 689.36 16391.14 00:06:45.870 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:45.870 rmmod nvme_tcp 00:06:45.870 rmmod nvme_fabrics 00:06:45.870 rmmod nvme_keyring 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3422082 ']' 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3422082 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3422082 ']' 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3422082 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3422082 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3422082' 00:06:45.870 killing process with pid 3422082 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3422082 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3422082 00:06:45.870 nvmf threads initialize successfully 00:06:45.870 bdev subsystem init successfully 00:06:45.870 created a nvmf target service 00:06:45.870 create targets's poll groups done 00:06:45.870 all subsystems of target started 00:06:45.870 nvmf target is running 00:06:45.870 all subsystems of target stopped 00:06:45.870 destroy targets's poll groups done 00:06:45.870 destroyed the nvmf target service 00:06:45.870 bdev subsystem finish successfully 00:06:45.870 nvmf threads destroy successfully 00:06:45.870 23:48:44 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.870 23:48:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.129 23:48:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:46.129 23:48:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:46.129 23:48:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.129 23:48:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:46.389 00:06:46.389 real 0m20.503s 00:06:46.389 user 0m45.240s 00:06:46.389 sys 0m7.304s 00:06:46.389 23:48:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.389 23:48:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:46.389 ************************************ 00:06:46.389 END TEST nvmf_example 00:06:46.389 ************************************ 00:06:46.389 23:48:46 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:46.389 23:48:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:46.389 23:48:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.389 23:48:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.389 ************************************ 00:06:46.389 START TEST nvmf_filesystem 00:06:46.389 ************************************ 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:46.389 * Looking for test storage... 
00:06:46.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:46.389 23:48:46 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:46.389 23:48:46 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:46.389 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:46.390 
23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:46.390 #define SPDK_CONFIG_H 00:06:46.390 #define SPDK_CONFIG_APPS 1 00:06:46.390 #define SPDK_CONFIG_ARCH native 00:06:46.390 #undef SPDK_CONFIG_ASAN 00:06:46.390 #undef SPDK_CONFIG_AVAHI 00:06:46.390 #undef SPDK_CONFIG_CET 00:06:46.390 #define SPDK_CONFIG_COVERAGE 1 00:06:46.390 #define SPDK_CONFIG_CROSS_PREFIX 00:06:46.390 #undef SPDK_CONFIG_CRYPTO 00:06:46.390 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:46.390 #undef SPDK_CONFIG_CUSTOMOCF 00:06:46.390 #undef SPDK_CONFIG_DAOS 00:06:46.390 #define SPDK_CONFIG_DAOS_DIR 00:06:46.390 #define SPDK_CONFIG_DEBUG 1 00:06:46.390 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:46.390 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:46.390 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:46.390 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:46.390 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:46.390 #undef SPDK_CONFIG_DPDK_UADK 00:06:46.390 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:46.390 #define SPDK_CONFIG_EXAMPLES 1 00:06:46.390 #undef SPDK_CONFIG_FC 00:06:46.390 #define SPDK_CONFIG_FC_PATH 00:06:46.390 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:46.390 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:46.390 #undef SPDK_CONFIG_FUSE 00:06:46.390 #undef SPDK_CONFIG_FUZZER 00:06:46.390 #define SPDK_CONFIG_FUZZER_LIB 00:06:46.390 #undef SPDK_CONFIG_GOLANG 00:06:46.390 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:46.390 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:46.390 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:46.390 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:46.390 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:46.390 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:46.390 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:46.390 #define SPDK_CONFIG_IDXD 1 00:06:46.390 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:46.390 #undef SPDK_CONFIG_IPSEC_MB 00:06:46.390 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:46.390 #define SPDK_CONFIG_ISAL 1 00:06:46.390 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:46.390 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:46.390 #define SPDK_CONFIG_LIBDIR 00:06:46.390 #undef SPDK_CONFIG_LTO 00:06:46.390 #define SPDK_CONFIG_MAX_LCORES 00:06:46.390 #define SPDK_CONFIG_NVME_CUSE 1 00:06:46.390 #undef SPDK_CONFIG_OCF 00:06:46.390 #define SPDK_CONFIG_OCF_PATH 00:06:46.390 #define SPDK_CONFIG_OPENSSL_PATH 00:06:46.390 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:46.390 #define SPDK_CONFIG_PGO_DIR 00:06:46.390 #undef 
SPDK_CONFIG_PGO_USE 00:06:46.390 #define SPDK_CONFIG_PREFIX /usr/local 00:06:46.390 #undef SPDK_CONFIG_RAID5F 00:06:46.390 #undef SPDK_CONFIG_RBD 00:06:46.390 #define SPDK_CONFIG_RDMA 1 00:06:46.390 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:46.390 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:46.390 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:46.390 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:46.390 #define SPDK_CONFIG_SHARED 1 00:06:46.390 #undef SPDK_CONFIG_SMA 00:06:46.390 #define SPDK_CONFIG_TESTS 1 00:06:46.390 #undef SPDK_CONFIG_TSAN 00:06:46.390 #define SPDK_CONFIG_UBLK 1 00:06:46.390 #define SPDK_CONFIG_UBSAN 1 00:06:46.390 #undef SPDK_CONFIG_UNIT_TESTS 00:06:46.390 #undef SPDK_CONFIG_URING 00:06:46.390 #define SPDK_CONFIG_URING_PATH 00:06:46.390 #undef SPDK_CONFIG_URING_ZNS 00:06:46.390 #undef SPDK_CONFIG_USDT 00:06:46.390 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:46.390 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:46.390 #define SPDK_CONFIG_VFIO_USER 1 00:06:46.390 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:46.390 #define SPDK_CONFIG_VHOST 1 00:06:46.390 #define SPDK_CONFIG_VIRTIO 1 00:06:46.390 #undef SPDK_CONFIG_VTUNE 00:06:46.390 #define SPDK_CONFIG_VTUNE_DIR 00:06:46.390 #define SPDK_CONFIG_WERROR 1 00:06:46.390 #define SPDK_CONFIG_WPDK_DIR 00:06:46.390 #undef SPDK_CONFIG_XNVME 00:06:46.390 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:46.390 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.652 23:48:46 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.652 23:48:46 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.652 23:48:46 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.652 23:48:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.652 23:48:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.652 23:48:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.652 23:48:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:46.653 23:48:46 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.653 23:48:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:46.653 23:48:46 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:46.653 23:48:46 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:46.653 23:48:46 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:46.653 23:48:46 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:06:46.653 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:06:46.654 23:48:47 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
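The long run of ': 0' / ': 1' lines above is autotest_common.sh giving every SPDK_TEST_*/SPDK_RUN_* switch a default before exporting it, so anything already set by autorun-spdk.conf wins and everything else falls back to 0. A minimal sketch of that default-then-export idiom, with a couple of the variables seen in this trace (the exact wording of the script may differ):

  # Keep a value already set by autorun-spdk.conf, otherwise use the default.
  : "${SPDK_RUN_FUNCTIONAL_TEST:=1}";   export SPDK_RUN_FUNCTIONAL_TEST
  : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT

  # The sanitizer knobs exported alongside them (values as logged above)
  # only affect runtime behaviour of ASAN/UBSAN/LSAN instrumented binaries.
  export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
  export LSAN_OPTIONS='suppressions=/var/tmp/asan_suppression_file'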
00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3424522 ]] 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3424522 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.XJv5Re 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.XJv5Re/tests/target /tmp/spdk.XJv5Re 00:06:46.654 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- 
# avails["$mount"]=67108864 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=972992512 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4311437312 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=52241326080 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742292992 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9500966912 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30867771392 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871146496 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12339077120 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348461056 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9383936 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30869868544 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871146496 00:06:46.655 23:48:47 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1277952 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:06:46.655 * Looking for test storage... 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=52241326080 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11715559424 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:46.655 23:48:47 
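set_test_storage above walks the `df -T` output, records size/avail/fstype per mount, and settles on the candidate directory whose filesystem has at least the requested ~2 GiB free (here the overlay root with roughly 52 GB available), exporting it as SPDK_TEST_STORAGE. A rough stand-alone sketch of the same selection; the helper name and structure are assumptions, not the literal script:

  pick_test_storage() {            # usage: pick_test_storage <bytes-needed> <dir>...
      local need=$1; shift
      local dir avail
      for dir in "$@"; do
          [[ -d $dir ]] || continue
          avail=$(df --output=avail -B1 "$dir" | tail -n1)   # free bytes on that mount
          if (( avail >= need )); then
              printf '%s\n' "$dir"
              return 0
          fi
      done
      return 1
  }
  SPDK_TEST_STORAGE=$(pick_test_storage $((2 * 1024 * 1024 * 1024)) "$PWD" /tmp) \
      && export SPDK_TEST_STORAGE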
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.655 
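nvmf/common.sh, sourced above by target/filesystem.sh, pins the listener ports (4420-4422), derives the host NQN with `nvme gen-hostnqn`, and keeps the matching --hostnqn/--hostid arguments in NVME_HOST for later `nvme connect` calls. A small sketch of an initiator-side connect built from those pieces; the target address and subsystem NQN are the values visible later in this log and are shown purely as an example:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # UUID portion reused as the host ID
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"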
23:48:47 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.655 23:48:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.656 23:48:47 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:46.656 23:48:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:53.271 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:53.271 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.271 23:48:53 
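The device scan above matches the Intel E810 PCI ID seen on this node (0x8086:0x159b) against the bus cache, confirms the `ice` driver is bound, and then gathers the kernel net devices exposed under each function's sysfs node (the "Found net devices under ..." lines that follow). A compact equivalent of that lookup, limited to the one device ID present here; the loop structure is an illustrative assumption, the sysfs paths are standard:

  # List net devices for every E810 (8086:159b) function bound to the ice driver.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      drv=$(basename "$(readlink -f /sys/bus/pci/devices/"$pci"/driver)")
      [[ $drv == ice ]] || continue
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $net ]] || continue
          echo "Found net device under $pci: $(basename "$net")"
      done
  done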
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:53.271 Found net devices under 0000:af:00.0: cvl_0_0 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:53.271 Found net devices under 0000:af:00.1: cvl_0_1 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.271 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:06:53.530 00:06:53.530 --- 10.0.0.2 ping statistics --- 00:06:53.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.530 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:06:53.530 00:06:53.530 --- 10.0.0.1 ping statistics --- 00:06:53.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.530 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:53.530 23:48:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.530 ************************************ 00:06:53.530 START TEST nvmf_filesystem_no_in_capsule 00:06:53.530 ************************************ 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3427818 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3427818 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3427818 ']' 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.530 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.790 [2024-05-14 23:48:54.135047] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:06:53.790 [2024-05-14 23:48:54.135097] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.790 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.790 [2024-05-14 23:48:54.210185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.790 [2024-05-14 23:48:54.288081] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.790 [2024-05-14 23:48:54.288117] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.790 [2024-05-14 23:48:54.288127] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.790 [2024-05-14 23:48:54.288136] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.790 [2024-05-14 23:48:54.288143] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
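The nvmf_tcp_init and nvmfappstart steps above reduce to a short shell sequence. The sketch below only restates commands already visible in this log; the interface names (cvl_0_0/cvl_0_1), the namespace name, the 10.0.0.x addresses, port 4420, the nvmf_tgt path and the core mask are the values from this particular run, and error handling/retries are omitted.

# Put the target-side port into its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator interface
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target interface
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP default port and confirm reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Launch the SPDK target inside the namespace; the harness then polls the
# /var/tmp/spdk.sock RPC socket (waitforlisten) before issuing any rpc_cmd calls.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!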
00:06:53.790 [2024-05-14 23:48:54.288188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.790 [2024-05-14 23:48:54.288285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.790 [2024-05-14 23:48:54.288308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.790 [2024-05-14 23:48:54.288310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.359 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.359 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:54.359 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:54.359 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.359 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.620 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.620 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:54.620 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:54.620 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.620 23:48:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.620 [2024-05-14 23:48:54.997990] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.620 Malloc1 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.620 [2024-05-14 23:48:55.147306] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:54.620 [2024-05-14 23:48:55.147582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:54.620 { 00:06:54.620 "name": "Malloc1", 00:06:54.620 "aliases": [ 00:06:54.620 "11224ba8-dc63-4ac3-9bfc-5f6316993820" 00:06:54.620 ], 00:06:54.620 "product_name": "Malloc disk", 00:06:54.620 "block_size": 512, 00:06:54.620 "num_blocks": 1048576, 00:06:54.620 "uuid": "11224ba8-dc63-4ac3-9bfc-5f6316993820", 00:06:54.620 "assigned_rate_limits": { 00:06:54.620 "rw_ios_per_sec": 0, 00:06:54.620 "rw_mbytes_per_sec": 0, 00:06:54.620 "r_mbytes_per_sec": 0, 00:06:54.620 "w_mbytes_per_sec": 0 00:06:54.620 }, 00:06:54.620 "claimed": true, 00:06:54.620 "claim_type": "exclusive_write", 00:06:54.620 "zoned": false, 00:06:54.620 "supported_io_types": { 00:06:54.620 "read": true, 00:06:54.620 "write": true, 00:06:54.620 "unmap": true, 00:06:54.620 "write_zeroes": true, 00:06:54.620 "flush": true, 00:06:54.620 "reset": true, 00:06:54.620 "compare": false, 00:06:54.620 "compare_and_write": false, 00:06:54.620 "abort": true, 00:06:54.620 "nvme_admin": false, 00:06:54.620 "nvme_io": false 00:06:54.620 }, 00:06:54.620 "memory_domains": [ 00:06:54.620 { 00:06:54.620 "dma_device_id": "system", 00:06:54.620 "dma_device_type": 1 
00:06:54.620 }, 00:06:54.620 { 00:06:54.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.620 "dma_device_type": 2 00:06:54.620 } 00:06:54.620 ], 00:06:54.620 "driver_specific": {} 00:06:54.620 } 00:06:54.620 ]' 00:06:54.620 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:54.880 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:54.880 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:54.880 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:54.880 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:54.880 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:54.880 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:54.880 23:48:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:56.259 23:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:56.259 23:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:56.259 23:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:56.259 23:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:56.259 23:48:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:58.166 23:48:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:58.166 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:58.425 23:48:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:58.994 23:48:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:59.932 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:59.932 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:59.932 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:59.932 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.932 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.191 ************************************ 00:07:00.191 START TEST filesystem_ext4 00:07:00.191 ************************************ 00:07:00.191 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:00.191 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:00.191 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.191 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:00.191 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:00.191 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:00.191 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:00.191 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:00.191 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:00.191 23:49:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:00.191 23:49:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:00.191 mke2fs 1.46.5 (30-Dec-2021) 00:07:00.191 Discarding device blocks: 0/522240 done 00:07:00.191 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:00.191 Filesystem UUID: 873453a1-2ab3-4865-8514-994237f736d9 00:07:00.191 Superblock backups stored on blocks: 00:07:00.191 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:00.191 00:07:00.191 Allocating group tables: 0/64 done 00:07:00.191 Writing inode tables: 0/64 done 00:07:00.450 Creating journal (8192 blocks): done 00:07:01.388 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:07:01.388 00:07:01.388 23:49:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:01.388 23:49:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:01.388 23:49:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:01.388 23:49:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:01.648 23:49:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:01.648 23:49:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:01.648 23:49:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:01.648 23:49:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3427818 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:01.648 00:07:01.648 real 0m1.483s 00:07:01.648 user 0m0.028s 00:07:01.648 sys 0m0.080s 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:01.648 ************************************ 00:07:01.648 END TEST filesystem_ext4 00:07:01.648 ************************************ 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.648 
23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.648 ************************************ 00:07:01.648 START TEST filesystem_btrfs 00:07:01.648 ************************************ 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:01.648 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:02.217 btrfs-progs v6.6.2 00:07:02.217 See https://btrfs.readthedocs.io for more information. 00:07:02.217 00:07:02.217 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:02.217 NOTE: several default settings have changed in version 5.15, please make sure 00:07:02.217 this does not affect your deployments: 00:07:02.217 - DUP for metadata (-m dup) 00:07:02.217 - enabled no-holes (-O no-holes) 00:07:02.217 - enabled free-space-tree (-R free-space-tree) 00:07:02.217 00:07:02.217 Label: (null) 00:07:02.217 UUID: 743efbdf-98b2-4976-bd2b-1cc9afcc6e1f 00:07:02.217 Node size: 16384 00:07:02.217 Sector size: 4096 00:07:02.217 Filesystem size: 510.00MiB 00:07:02.217 Block group profiles: 00:07:02.217 Data: single 8.00MiB 00:07:02.217 Metadata: DUP 32.00MiB 00:07:02.217 System: DUP 8.00MiB 00:07:02.217 SSD detected: yes 00:07:02.217 Zoned device: no 00:07:02.217 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:02.217 Runtime features: free-space-tree 00:07:02.217 Checksum: crc32c 00:07:02.217 Number of devices: 1 00:07:02.217 Devices: 00:07:02.217 ID SIZE PATH 00:07:02.217 1 510.00MiB /dev/nvme0n1p1 00:07:02.217 00:07:02.217 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:02.217 23:49:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3427818 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:03.155 00:07:03.155 real 0m1.329s 00:07:03.155 user 0m0.032s 00:07:03.155 sys 0m0.142s 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:03.155 ************************************ 00:07:03.155 END TEST filesystem_btrfs 00:07:03.155 ************************************ 00:07:03.155 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:03.155 23:49:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.156 ************************************ 00:07:03.156 START TEST filesystem_xfs 00:07:03.156 ************************************ 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:03.156 23:49:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:03.156 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:03.156 = sectsz=512 attr=2, projid32bit=1 00:07:03.156 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:03.156 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:03.156 data = bsize=4096 blocks=130560, imaxpct=25 00:07:03.156 = sunit=0 swidth=0 blks 00:07:03.156 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:03.156 log =internal log bsize=4096 blocks=16384, version=2 00:07:03.156 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:03.156 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:04.094 Discarding blocks...Done. 
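Each filesystem_* subtest (ext4 and btrfs above, xfs here) runs the same create-and-exercise check once the partition exists. Roughly, as a condensed sketch with the device and PID values taken from this run and the make_filesystem retry loop left out:

# One nvmf_filesystem_create pass over the NVMe-oF-backed partition.
fstype=$1                                   # ext4 | btrfs | xfs
dev=/dev/nvme0n1p1
force=-f
[ "$fstype" = ext4 ] && force=-F            # ext4 takes -F, btrfs/xfs take -f

mkfs."$fstype" "$force" "$dev"              # build the filesystem
mount "$dev" /mnt/device                    # do a trivial write/delete cycle on it
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

kill -0 "$nvmfpid"                          # nvmf_tgt (pid 3427818 in this pass) must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1       # remote namespace and partition still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1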
00:07:04.095 23:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:04.095 23:49:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:06.001 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3427818 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:06.261 00:07:06.261 real 0m3.112s 00:07:06.261 user 0m0.034s 00:07:06.261 sys 0m0.079s 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:06.261 ************************************ 00:07:06.261 END TEST filesystem_xfs 00:07:06.261 ************************************ 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:06.261 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:06.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:06.521 
23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3427818 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3427818 ']' 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3427818 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.521 23:49:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3427818 00:07:06.521 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:06.521 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:06.521 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3427818' 00:07:06.521 killing process with pid 3427818 00:07:06.521 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3427818 00:07:06.521 [2024-05-14 23:49:07.021998] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:06.521 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3427818 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:07.090 00:07:07.090 real 0m13.309s 00:07:07.090 user 0m51.809s 00:07:07.090 sys 0m1.840s 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.090 ************************************ 00:07:07.090 END TEST nvmf_filesystem_no_in_capsule 00:07:07.090 ************************************ 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.090 ************************************ 00:07:07.090 START TEST nvmf_filesystem_in_capsule 00:07:07.090 ************************************ 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3430427 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3430427 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3430427 ']' 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.090 23:49:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.090 [2024-05-14 23:49:07.534081] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:07.090 [2024-05-14 23:49:07.534123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.090 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.090 [2024-05-14 23:49:07.607402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.090 [2024-05-14 23:49:07.675402] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.090 [2024-05-14 23:49:07.675445] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
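The in_capsule pass that starts here repeats the provisioning already performed for the no_in_capsule case; the only functional difference is the in-capsule data size passed to nvmf_create_transport (4096 instead of 0). Assuming rpc_cmd resolves, as usual in this harness, to scripts/rpc.py against /var/tmp/spdk.sock, the target/host setup is roughly:

# Target side: TCP transport with 4 KiB in-capsule data, a 512 MiB malloc bdev
# (1048576 x 512-byte blocks), and a subsystem listening on 10.0.0.2:4420.
rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096
$rpc bdev_malloc_create 512 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect, then wait until lsblk reports the SPDKISFASTANDAWESOME serial.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
    --hostid=006f0d1b-21c0-e711-906e-00163566263e
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME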
00:07:07.090 [2024-05-14 23:49:07.675455] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.090 [2024-05-14 23:49:07.675464] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.090 [2024-05-14 23:49:07.675471] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.090 [2024-05-14 23:49:07.675518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.090 [2024-05-14 23:49:07.675638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.090 [2024-05-14 23:49:07.675741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.090 [2024-05-14 23:49:07.675742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.031 [2024-05-14 23:49:08.392058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.031 Malloc1 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.031 23:49:08 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.031 [2024-05-14 23:49:08.541686] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:08.031 [2024-05-14 23:49:08.541957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:08.031 { 00:07:08.031 "name": "Malloc1", 00:07:08.031 "aliases": [ 00:07:08.031 "d69dee0a-be8f-459f-9374-130b33ee1032" 00:07:08.031 ], 00:07:08.031 "product_name": "Malloc disk", 00:07:08.031 "block_size": 512, 00:07:08.031 "num_blocks": 1048576, 00:07:08.031 "uuid": "d69dee0a-be8f-459f-9374-130b33ee1032", 00:07:08.031 "assigned_rate_limits": { 00:07:08.031 "rw_ios_per_sec": 0, 00:07:08.031 "rw_mbytes_per_sec": 0, 00:07:08.031 "r_mbytes_per_sec": 0, 00:07:08.031 "w_mbytes_per_sec": 0 00:07:08.031 }, 00:07:08.031 "claimed": true, 00:07:08.031 "claim_type": "exclusive_write", 00:07:08.031 "zoned": false, 00:07:08.031 "supported_io_types": { 00:07:08.031 "read": true, 00:07:08.031 "write": true, 00:07:08.031 "unmap": true, 00:07:08.031 "write_zeroes": true, 00:07:08.031 "flush": true, 00:07:08.031 "reset": true, 
00:07:08.031 "compare": false, 00:07:08.031 "compare_and_write": false, 00:07:08.031 "abort": true, 00:07:08.031 "nvme_admin": false, 00:07:08.031 "nvme_io": false 00:07:08.031 }, 00:07:08.031 "memory_domains": [ 00:07:08.031 { 00:07:08.031 "dma_device_id": "system", 00:07:08.031 "dma_device_type": 1 00:07:08.031 }, 00:07:08.031 { 00:07:08.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.031 "dma_device_type": 2 00:07:08.031 } 00:07:08.031 ], 00:07:08.031 "driver_specific": {} 00:07:08.031 } 00:07:08.031 ]' 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:08.031 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:08.328 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:08.328 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:08.328 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:08.328 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:08.328 23:49:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:09.737 23:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:09.737 23:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:09.737 23:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:09.737 23:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:09.737 23:49:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:11.640 23:49:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:11.640 23:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:11.640 23:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:11.899 23:49:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:12.836 23:49:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.773 ************************************ 00:07:13.773 START TEST filesystem_in_capsule_ext4 00:07:13.773 ************************************ 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:13.773 23:49:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:13.773 mke2fs 1.46.5 (30-Dec-2021) 00:07:13.773 Discarding device blocks: 0/522240 done 00:07:14.031 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:14.031 Filesystem UUID: 1e6deacb-7e09-40d6-9ee5-ff8592d6e432 00:07:14.031 Superblock backups stored on blocks: 00:07:14.031 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:14.031 00:07:14.031 Allocating group tables: 0/64 done 00:07:14.031 Writing inode tables: 0/64 done 00:07:14.599 Creating journal (8192 blocks): done 00:07:14.599 Writing superblocks and filesystem accounting information: 0/64 done 00:07:14.599 00:07:14.599 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:14.599 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3430427 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:15.166 00:07:15.166 real 0m1.399s 00:07:15.166 user 0m0.034s 00:07:15.166 sys 0m0.070s 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:15.166 ************************************ 00:07:15.166 END TEST filesystem_in_capsule_ext4 00:07:15.166 ************************************ 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.166 ************************************ 00:07:15.166 START TEST filesystem_in_capsule_btrfs 00:07:15.166 ************************************ 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:15.166 23:49:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:15.734 btrfs-progs v6.6.2 00:07:15.734 See https://btrfs.readthedocs.io for more information. 00:07:15.734 00:07:15.734 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:15.734 NOTE: several default settings have changed in version 5.15, please make sure 00:07:15.734 this does not affect your deployments: 00:07:15.734 - DUP for metadata (-m dup) 00:07:15.734 - enabled no-holes (-O no-holes) 00:07:15.734 - enabled free-space-tree (-R free-space-tree) 00:07:15.734 00:07:15.734 Label: (null) 00:07:15.734 UUID: f5bc7818-9d2b-4e65-9cae-daa2079fe36f 00:07:15.734 Node size: 16384 00:07:15.734 Sector size: 4096 00:07:15.734 Filesystem size: 510.00MiB 00:07:15.734 Block group profiles: 00:07:15.734 Data: single 8.00MiB 00:07:15.734 Metadata: DUP 32.00MiB 00:07:15.734 System: DUP 8.00MiB 00:07:15.734 SSD detected: yes 00:07:15.734 Zoned device: no 00:07:15.735 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:15.735 Runtime features: free-space-tree 00:07:15.735 Checksum: crc32c 00:07:15.735 Number of devices: 1 00:07:15.735 Devices: 00:07:15.735 ID SIZE PATH 00:07:15.735 1 510.00MiB /dev/nvme0n1p1 00:07:15.735 00:07:15.735 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:15.735 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3430427 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.303 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.563 00:07:16.563 real 0m1.152s 00:07:16.563 user 0m0.035s 00:07:16.563 sys 0m0.135s 00:07:16.563 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.563 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:16.563 ************************************ 00:07:16.563 END TEST filesystem_in_capsule_btrfs 00:07:16.563 ************************************ 00:07:16.563 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:16.563 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:16.563 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.563 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.563 ************************************ 00:07:16.563 START TEST filesystem_in_capsule_xfs 00:07:16.563 ************************************ 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:16.564 23:49:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:16.564 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:16.564 = sectsz=512 attr=2, projid32bit=1 00:07:16.564 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:16.564 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:16.564 data = bsize=4096 blocks=130560, imaxpct=25 00:07:16.564 = sunit=0 swidth=0 blks 00:07:16.564 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:16.564 log =internal log bsize=4096 blocks=16384, version=2 00:07:16.564 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:16.564 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:17.501 Discarding blocks...Done. 
00:07:17.501 23:49:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:17.501 23:49:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3430427 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.412 00:07:19.412 real 0m2.974s 00:07:19.412 user 0m0.031s 00:07:19.412 sys 0m0.082s 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.412 23:49:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:19.412 ************************************ 00:07:19.412 END TEST filesystem_in_capsule_xfs 00:07:19.412 ************************************ 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:19.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.671 23:49:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3430427 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3430427 ']' 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3430427 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.671 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3430427 00:07:19.930 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:19.930 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:19.930 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3430427' 00:07:19.930 killing process with pid 3430427 00:07:19.930 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3430427 00:07:19.930 [2024-05-14 23:49:20.282930] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:19.930 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3430427 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:20.189 00:07:20.189 real 0m13.170s 00:07:20.189 user 0m51.319s 00:07:20.189 sys 0m1.841s 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.189 ************************************ 00:07:20.189 END TEST nvmf_filesystem_in_capsule 00:07:20.189 ************************************ 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:20.189 rmmod nvme_tcp 00:07:20.189 rmmod nvme_fabrics 00:07:20.189 rmmod nvme_keyring 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:20.189 23:49:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.190 23:49:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.190 23:49:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.727 23:49:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.727 00:07:22.727 real 0m35.998s 00:07:22.727 user 1m45.164s 00:07:22.727 sys 0m9.182s 00:07:22.727 23:49:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.727 23:49:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.727 ************************************ 00:07:22.727 END TEST nvmf_filesystem 00:07:22.727 ************************************ 00:07:22.727 23:49:22 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:22.727 23:49:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:22.727 23:49:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.727 23:49:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.727 ************************************ 00:07:22.727 START TEST nvmf_target_discovery 00:07:22.727 ************************************ 00:07:22.727 23:49:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:22.727 * Looking for test storage... 
00:07:22.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:22.727 23:49:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.728 23:49:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.378 23:49:29 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:29.378 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:29.378 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:29.378 Found net devices under 0000:af:00.0: cvl_0_0 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:29.378 Found net devices under 0000:af:00.1: cvl_0_1 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.378 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:29.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:07:29.379 00:07:29.379 --- 10.0.0.2 ping statistics --- 00:07:29.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.379 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:07:29.379 00:07:29.379 --- 10.0.0.1 ping statistics --- 00:07:29.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.379 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3436461 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3436461 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3436461 ']' 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:29.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:29.379 23:49:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.379 [2024-05-14 23:49:29.805611] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:29.379 [2024-05-14 23:49:29.805657] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.379 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.379 [2024-05-14 23:49:29.877999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.379 [2024-05-14 23:49:29.952294] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.379 [2024-05-14 23:49:29.952329] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.379 [2024-05-14 23:49:29.952338] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.379 [2024-05-14 23:49:29.952346] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.379 [2024-05-14 23:49:29.952353] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.379 [2024-05-14 23:49:29.952398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.379 [2024-05-14 23:49:29.952494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.379 [2024-05-14 23:49:29.952555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.379 [2024-05-14 23:49:29.952557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 [2024-05-14 23:49:30.667136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:30.319 23:49:30 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 Null1 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 [2024-05-14 23:49:30.719257] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:30.319 [2024-05-14 23:49:30.719474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 Null2 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 Null3 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 Null4 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.319 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:07:30.579 00:07:30.579 Discovery Log Number of Records 6, Generation counter 6 00:07:30.579 =====Discovery Log Entry 0====== 00:07:30.579 trtype: tcp 00:07:30.579 adrfam: ipv4 00:07:30.579 subtype: current discovery subsystem 00:07:30.579 treq: not required 00:07:30.579 portid: 0 00:07:30.579 trsvcid: 4420 00:07:30.579 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:30.579 traddr: 10.0.0.2 00:07:30.579 eflags: explicit discovery connections, duplicate discovery information 00:07:30.579 sectype: none 00:07:30.579 =====Discovery Log Entry 1====== 00:07:30.579 trtype: tcp 00:07:30.579 adrfam: ipv4 00:07:30.579 subtype: nvme subsystem 00:07:30.579 treq: not required 00:07:30.579 portid: 0 00:07:30.579 trsvcid: 4420 00:07:30.579 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:30.579 traddr: 10.0.0.2 00:07:30.579 eflags: none 00:07:30.579 sectype: none 00:07:30.579 =====Discovery Log Entry 2====== 00:07:30.579 trtype: tcp 00:07:30.579 adrfam: ipv4 00:07:30.579 subtype: nvme subsystem 00:07:30.579 treq: not required 00:07:30.579 portid: 0 00:07:30.579 trsvcid: 4420 00:07:30.579 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:30.579 traddr: 10.0.0.2 00:07:30.579 eflags: none 00:07:30.579 sectype: none 00:07:30.579 =====Discovery Log Entry 3====== 00:07:30.579 trtype: tcp 00:07:30.579 adrfam: ipv4 00:07:30.579 subtype: nvme subsystem 00:07:30.579 treq: not required 00:07:30.579 portid: 0 00:07:30.579 trsvcid: 4420 00:07:30.579 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:30.579 traddr: 10.0.0.2 
00:07:30.579 eflags: none 00:07:30.579 sectype: none 00:07:30.579 =====Discovery Log Entry 4====== 00:07:30.579 trtype: tcp 00:07:30.579 adrfam: ipv4 00:07:30.579 subtype: nvme subsystem 00:07:30.579 treq: not required 00:07:30.579 portid: 0 00:07:30.579 trsvcid: 4420 00:07:30.579 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:30.579 traddr: 10.0.0.2 00:07:30.579 eflags: none 00:07:30.579 sectype: none 00:07:30.579 =====Discovery Log Entry 5====== 00:07:30.579 trtype: tcp 00:07:30.579 adrfam: ipv4 00:07:30.579 subtype: discovery subsystem referral 00:07:30.579 treq: not required 00:07:30.579 portid: 0 00:07:30.579 trsvcid: 4430 00:07:30.579 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:30.579 traddr: 10.0.0.2 00:07:30.579 eflags: none 00:07:30.579 sectype: none 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:30.579 Perform nvmf subsystem discovery via RPC 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.579 [ 00:07:30.579 { 00:07:30.579 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:30.579 "subtype": "Discovery", 00:07:30.579 "listen_addresses": [ 00:07:30.579 { 00:07:30.579 "trtype": "TCP", 00:07:30.579 "adrfam": "IPv4", 00:07:30.579 "traddr": "10.0.0.2", 00:07:30.579 "trsvcid": "4420" 00:07:30.579 } 00:07:30.579 ], 00:07:30.579 "allow_any_host": true, 00:07:30.579 "hosts": [] 00:07:30.579 }, 00:07:30.579 { 00:07:30.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.579 "subtype": "NVMe", 00:07:30.579 "listen_addresses": [ 00:07:30.579 { 00:07:30.579 "trtype": "TCP", 00:07:30.579 "adrfam": "IPv4", 00:07:30.579 "traddr": "10.0.0.2", 00:07:30.579 "trsvcid": "4420" 00:07:30.579 } 00:07:30.579 ], 00:07:30.579 "allow_any_host": true, 00:07:30.579 "hosts": [], 00:07:30.579 "serial_number": "SPDK00000000000001", 00:07:30.579 "model_number": "SPDK bdev Controller", 00:07:30.579 "max_namespaces": 32, 00:07:30.579 "min_cntlid": 1, 00:07:30.579 "max_cntlid": 65519, 00:07:30.579 "namespaces": [ 00:07:30.579 { 00:07:30.579 "nsid": 1, 00:07:30.579 "bdev_name": "Null1", 00:07:30.579 "name": "Null1", 00:07:30.579 "nguid": "D0FCFAEAB3284C2CB62D2F6C88977075", 00:07:30.579 "uuid": "d0fcfaea-b328-4c2c-b62d-2f6c88977075" 00:07:30.579 } 00:07:30.579 ] 00:07:30.579 }, 00:07:30.579 { 00:07:30.579 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:30.579 "subtype": "NVMe", 00:07:30.579 "listen_addresses": [ 00:07:30.579 { 00:07:30.579 "trtype": "TCP", 00:07:30.579 "adrfam": "IPv4", 00:07:30.579 "traddr": "10.0.0.2", 00:07:30.579 "trsvcid": "4420" 00:07:30.579 } 00:07:30.579 ], 00:07:30.579 "allow_any_host": true, 00:07:30.579 "hosts": [], 00:07:30.579 "serial_number": "SPDK00000000000002", 00:07:30.579 "model_number": "SPDK bdev Controller", 00:07:30.579 "max_namespaces": 32, 00:07:30.579 "min_cntlid": 1, 00:07:30.579 "max_cntlid": 65519, 00:07:30.579 "namespaces": [ 00:07:30.579 { 00:07:30.579 "nsid": 1, 00:07:30.579 "bdev_name": "Null2", 00:07:30.579 "name": "Null2", 00:07:30.579 "nguid": "B5B715AE9BB34328BADF5CE56A91C092", 00:07:30.579 "uuid": "b5b715ae-9bb3-4328-badf-5ce56a91c092" 00:07:30.579 } 00:07:30.579 ] 00:07:30.579 }, 00:07:30.579 { 00:07:30.579 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:30.579 "subtype": "NVMe", 00:07:30.579 "listen_addresses": [ 
00:07:30.579 { 00:07:30.579 "trtype": "TCP", 00:07:30.579 "adrfam": "IPv4", 00:07:30.579 "traddr": "10.0.0.2", 00:07:30.579 "trsvcid": "4420" 00:07:30.579 } 00:07:30.579 ], 00:07:30.579 "allow_any_host": true, 00:07:30.579 "hosts": [], 00:07:30.579 "serial_number": "SPDK00000000000003", 00:07:30.579 "model_number": "SPDK bdev Controller", 00:07:30.579 "max_namespaces": 32, 00:07:30.579 "min_cntlid": 1, 00:07:30.579 "max_cntlid": 65519, 00:07:30.579 "namespaces": [ 00:07:30.579 { 00:07:30.579 "nsid": 1, 00:07:30.579 "bdev_name": "Null3", 00:07:30.579 "name": "Null3", 00:07:30.579 "nguid": "B08B6876797F49F786F7EAFBF1B90442", 00:07:30.579 "uuid": "b08b6876-797f-49f7-86f7-eafbf1b90442" 00:07:30.579 } 00:07:30.579 ] 00:07:30.579 }, 00:07:30.579 { 00:07:30.579 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:30.579 "subtype": "NVMe", 00:07:30.579 "listen_addresses": [ 00:07:30.579 { 00:07:30.579 "trtype": "TCP", 00:07:30.579 "adrfam": "IPv4", 00:07:30.579 "traddr": "10.0.0.2", 00:07:30.579 "trsvcid": "4420" 00:07:30.579 } 00:07:30.579 ], 00:07:30.579 "allow_any_host": true, 00:07:30.579 "hosts": [], 00:07:30.579 "serial_number": "SPDK00000000000004", 00:07:30.579 "model_number": "SPDK bdev Controller", 00:07:30.579 "max_namespaces": 32, 00:07:30.579 "min_cntlid": 1, 00:07:30.579 "max_cntlid": 65519, 00:07:30.579 "namespaces": [ 00:07:30.579 { 00:07:30.579 "nsid": 1, 00:07:30.579 "bdev_name": "Null4", 00:07:30.579 "name": "Null4", 00:07:30.579 "nguid": "EDF0329D5063442B9DC4538A2E991D1D", 00:07:30.579 "uuid": "edf0329d-5063-442b-9dc4-538a2e991d1d" 00:07:30.579 } 00:07:30.579 ] 00:07:30.579 } 00:07:30.579 ] 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.579 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:30.580 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.580 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.580 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.580 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:30.580 23:49:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:30.580 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.580 23:49:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:30.580 
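For reference, the subsystem lifecycle that target/discovery.sh drives through rpc_cmd above can be reproduced by hand with the standard SPDK scripts/rpc.py client (rpc_cmd in this trace is assumed to wrap it); the Null3/cnode3 names, block sizes and addresses below are simply the ones used on this rig, not required values:
  # setup: one null bdev, one subsystem, one TCP listener, plus discovery listener and referral
  scripts/rpc.py bdev_null_create Null3 102400 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  scripts/rpc.py nvmf_get_subsystems                 # dump the resulting configuration as JSON
  # teardown, mirroring the deletes in the trace
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
  scripts/rpc.py bdev_null_delete Null3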
23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.580 rmmod nvme_tcp 00:07:30.580 rmmod nvme_fabrics 00:07:30.580 rmmod nvme_keyring 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3436461 ']' 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3436461 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3436461 ']' 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3436461 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:30.580 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3436461 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3436461' 00:07:30.840 killing process with pid 3436461 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3436461 00:07:30.840 [2024-05-14 23:49:31.213533] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3436461 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.840 23:49:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.379 23:49:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.379 00:07:33.379 real 0m10.575s 00:07:33.379 user 
0m7.816s 00:07:33.379 sys 0m5.495s 00:07:33.380 23:49:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.380 23:49:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:33.380 ************************************ 00:07:33.380 END TEST nvmf_target_discovery 00:07:33.380 ************************************ 00:07:33.380 23:49:33 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:33.380 23:49:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:33.380 23:49:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.380 23:49:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.380 ************************************ 00:07:33.380 START TEST nvmf_referrals 00:07:33.380 ************************************ 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:33.380 * Looking for test storage... 00:07:33.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.380 23:49:33 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:33.380 23:49:33 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.380 23:49:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.951 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:39.952 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:39.952 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:39.952 Found net devices under 0000:af:00.0: cvl_0_0 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:39.952 Found net devices under 0000:af:00.1: cvl_0_1 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
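The device scan above walks pci_devs looking for the E810 IDs and then resolves each PCI function to its kernel netdev through sysfs; a rough manual equivalent, using the BDF and device ID seen in this run, would be:
  # list Intel E810 functions (vendor 0x8086, device 0x159b)
  lspci -d 8086:159b
  # map a PCI function to its net device, as nvmf/common.sh does via pci_net_devs
  ls /sys/bus/pci/devices/0000:af:00.0/net/          # -> cvl_0_0 on this machine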
00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:07:39.952 00:07:39.952 --- 10.0.0.2 ping statistics --- 00:07:39.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.952 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:07:39.952 00:07:39.952 --- 10.0.0.1 ping statistics --- 00:07:39.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.952 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:39.952 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3440457 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3440457 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3440457 ']' 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
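nvmf_tcp_init above splits the two cvl ports between the host and a private target namespace before running the ping checks; condensed into a standalone sketch (interface and namespace names as on this rig):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> root namespace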
00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.212 23:49:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.212 [2024-05-14 23:49:40.621587] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:07:40.212 [2024-05-14 23:49:40.621633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.212 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.212 [2024-05-14 23:49:40.696061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.212 [2024-05-14 23:49:40.770724] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.212 [2024-05-14 23:49:40.770763] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.212 [2024-05-14 23:49:40.770773] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.212 [2024-05-14 23:49:40.770781] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.212 [2024-05-14 23:49:40.770788] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.212 [2024-05-14 23:49:40.770840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.212 [2024-05-14 23:49:40.770933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.212 [2024-05-14 23:49:40.770952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.212 [2024-05-14 23:49:40.770953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.150 [2024-05-14 23:49:41.478996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.150 [2024-05-14 23:49:41.495000] 
nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:41.150 [2024-05-14 23:49:41.495221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:41.150 
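The referral setup traced above reduces to a handful of RPCs plus a count check; as a hedged sketch under the same scripts/rpc.py assumption as before, with the loopback referral addresses taken from referrals.sh:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expected: 3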
23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:41.150 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.408 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:41.409 23:49:41 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.409 23:49:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:41.666 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:41.666 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:41.666 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:41.666 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:41.666 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:41.666 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.666 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:41.925 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
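The get_referral_ips/get_discovery_entries helpers being exercised here both come down to one nvme-cli discovery against the 8009 listener plus a jq filter; the pipeline, as it appears in the trace, is:
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
                --hostid=006f0d1b-21c0-e711-906e-00163566263e \
                -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # get_discovery_entries keeps whole records of one subtype instead, e.g.:
  #   ... | jq '.records[] | select(.subtype == "discovery subsystem referral")'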
00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:42.184 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:42.185 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:42.185 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.185 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:42.185 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:42.185 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:42.185 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:42.185 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:42.185 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:42.444 23:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:42.444 23:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:42.444 23:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:42.444 23:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:42.444 23:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:42.444 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:42.444 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:42.444 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.444 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:42.444 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.444 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.704 rmmod nvme_tcp 00:07:42.704 rmmod nvme_fabrics 00:07:42.704 rmmod nvme_keyring 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3440457 ']' 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3440457 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3440457 ']' 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3440457 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3440457 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3440457' 00:07:42.704 killing process with pid 3440457 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3440457 00:07:42.704 [2024-05-14 23:49:43.151392] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:42.704 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3440457 00:07:42.963 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.963 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.963 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.963 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.963 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
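After the referral checks, nvmftestfini unwinds the host side; the steps visible in this trace amount to roughly the following (the pid and interface name are specific to this run):
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 3440457                  # nvmf_tgt pid recorded at startup (nvmfpid)
  ip -4 addr flush cvl_0_1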
00:07:42.963 23:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.963 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.963 23:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.869 23:49:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:44.869 00:07:44.869 real 0m11.843s 00:07:44.869 user 0m12.846s 00:07:44.869 sys 0m6.001s 00:07:44.869 23:49:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.869 23:49:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.869 ************************************ 00:07:44.869 END TEST nvmf_referrals 00:07:44.869 ************************************ 00:07:45.128 23:49:45 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:45.128 23:49:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:45.128 23:49:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.128 23:49:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.128 ************************************ 00:07:45.128 START TEST nvmf_connect_disconnect 00:07:45.128 ************************************ 00:07:45.128 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:45.128 * Looking for test storage... 00:07:45.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.128 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.128 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:45.128 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.128 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.128 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.128 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.129 23:49:45 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
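common.sh turns the host identity it just generated into reusable pieces: NVME_CONNECT holds the bare 'nvme connect' command and NVME_HOST carries the matching --hostnqn/--hostid arguments. The connect_disconnect test later combines them against the cnode1 subsystem it creates; the exact call site is outside this excerpt, so the following composition is a sketch rather than the script's literal line:

  # Connect the kernel NVMe/TCP initiator as the generated host identity;
  # 10.0.0.2:4420 and cnode1 are the listener and subsystem set up below.
  $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # ...and tear the association down again:
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Repeating that pair is what produces the 'disconnected 1 controller(s)' lines further down in this test.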
00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.129 23:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:51.739 
23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:51.739 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:51.739 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:51.739 Found net devices under 0000:af:00.0: cvl_0_0 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:51.739 Found net devices under 0000:af:00.1: cvl_0_1 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.739 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.740 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.740 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.740 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.740 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.740 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.740 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.740 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.740 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.740 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.999 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.999 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.999 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:07:51.999 00:07:51.999 --- 10.0.0.2 ping statistics --- 00:07:51.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.999 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:07:51.999 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:51.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:07:51.999 00:07:51.999 --- 10.0.0.1 ping statistics --- 00:07:51.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.999 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3444610 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3444610 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3444610 ']' 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.000 23:49:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.000 [2024-05-14 23:49:52.524316] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
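With the two ice ports exposed as cvl_0_0 and cvl_0_1, the harness builds a loopback topology: the target-side port is moved into the cvl_0_0_ns_spdk namespace, each side gets an address on 10.0.0.0/24, traffic to the NVMe/TCP port is allowed through, and nvmf_tgt is started inside the namespace. Together with the RPC calls traced next, the bring-up condenses to roughly the following (rpc.py stands in for the script's rpc_cmd wrapper, and the relative paths are illustrative):

  # Loopback topology: the initiator stays in the root namespace, the
  # target-side port moves into its own namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Start the target inside the namespace (the harness waits for its RPC
  # socket), then export a 64 MiB malloc bdev over NVMe/TCP on 10.0.0.2:4420.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  bdev=$(./scripts/rpc.py bdev_malloc_create 64 512)     # prints the new bdev name, Malloc0 here
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420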
00:07:52.000 [2024-05-14 23:49:52.524363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.000 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.259 [2024-05-14 23:49:52.596830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.259 [2024-05-14 23:49:52.676014] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.259 [2024-05-14 23:49:52.676051] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.259 [2024-05-14 23:49:52.676060] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.259 [2024-05-14 23:49:52.676069] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.259 [2024-05-14 23:49:52.676076] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.259 [2024-05-14 23:49:52.676123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.259 [2024-05-14 23:49:52.676227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.259 [2024-05-14 23:49:52.676293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.259 [2024-05-14 23:49:52.676295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.829 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.829 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:07:52.829 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.829 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.830 [2024-05-14 23:49:53.372942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:52.830 23:49:53 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.830 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:53.088 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.088 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.088 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.088 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:53.088 [2024-05-14 23:49:53.427508] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:53.088 [2024-05-14 23:49:53.427766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.088 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.088 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:53.088 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:53.088 23:49:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:56.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.452 rmmod nvme_tcp 00:08:10.452 rmmod nvme_fabrics 00:08:10.452 rmmod nvme_keyring 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:10.452 23:50:10 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3444610 ']' 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3444610 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3444610 ']' 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3444610 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3444610 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3444610' 00:08:10.452 killing process with pid 3444610 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3444610 00:08:10.452 [2024-05-14 23:50:10.775780] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3444610 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.452 23:50:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.452 23:50:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.452 23:50:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.452 23:50:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.063 23:50:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.063 00:08:13.063 real 0m27.546s 00:08:13.063 user 1m13.936s 00:08:13.063 sys 0m7.266s 00:08:13.064 23:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.064 23:50:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.064 ************************************ 00:08:13.064 END TEST nvmf_connect_disconnect 00:08:13.064 ************************************ 00:08:13.064 23:50:13 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:13.064 23:50:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:13.064 23:50:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.064 23:50:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.064 ************************************ 00:08:13.064 START TEST nvmf_multitarget 
00:08:13.064 ************************************ 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:13.064 * Looking for test storage... 00:08:13.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.064 23:50:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.634 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:19.635 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:19.635 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:19.635 Found net devices under 0000:af:00.0: cvl_0_0 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
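The device-discovery logic in this trace keys the supported NICs by PCI vendor and device ID (0x8086:0x159b for the E810 ports found here) and then maps each selected PCI function to its kernel interface through sysfs. Stripped of the trace plumbing, the core of that mapping is roughly as follows; the pci_bus_cache lookup is elided, so the address list is written out by hand from the 'Found 0000:af:00.x' lines above:

  # Map each selected PCI function to the net interface(s) bound to it.
  pci_devs=(0000:af:00.0 0000:af:00.1)
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
      net_devs+=("${pci_net_devs[@]}")                   # -> cvl_0_0, cvl_0_1
  done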
00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:19.635 Found net devices under 0000:af:00.1: cvl_0_1 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.635 23:50:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:19.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:08:19.635 00:08:19.635 --- 10.0.0.2 ping statistics --- 00:08:19.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.635 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:08:19.635 00:08:19.635 --- 10.0.0.1 ping statistics --- 00:08:19.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.635 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3451534 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3451534 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3451534 ']' 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:19.635 23:50:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:19.895 [2024-05-14 23:50:20.244678] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
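Once the target application is up, the multitarget test drives it through the multitarget_rpc.py helper set up earlier: it counts the targets, creates two more, and deletes them again, checking the count at each step. The calls traced below condense to roughly the following (rpc_py abbreviates the full script path shown in the trace):

  rpc_py=./test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new targets
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default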
00:08:19.895 [2024-05-14 23:50:20.244727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.895 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.895 [2024-05-14 23:50:20.316863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.895 [2024-05-14 23:50:20.391177] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.895 [2024-05-14 23:50:20.391221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.895 [2024-05-14 23:50:20.391231] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.895 [2024-05-14 23:50:20.391239] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.895 [2024-05-14 23:50:20.391246] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.895 [2024-05-14 23:50:20.391287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.895 [2024-05-14 23:50:20.391381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.895 [2024-05-14 23:50:20.391470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.895 [2024-05-14 23:50:20.391471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:20.830 "nvmf_tgt_1" 00:08:20.830 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:20.830 "nvmf_tgt_2" 00:08:21.089 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:21.089 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:21.089 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:21.089 
23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:21.089 true 00:08:21.089 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:21.348 true 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.348 rmmod nvme_tcp 00:08:21.348 rmmod nvme_fabrics 00:08:21.348 rmmod nvme_keyring 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3451534 ']' 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3451534 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3451534 ']' 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3451534 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:21.348 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3451534 00:08:21.607 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:21.607 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:21.607 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3451534' 00:08:21.607 killing process with pid 3451534 00:08:21.607 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3451534 00:08:21.607 23:50:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3451534 00:08:21.607 23:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.607 23:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.607 23:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.607 23:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.607 23:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.607 23:50:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.607 23:50:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.607 23:50:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.141 23:50:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.141 00:08:24.141 real 0m11.068s 00:08:24.141 user 0m9.462s 00:08:24.141 sys 0m5.861s 00:08:24.141 23:50:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.141 23:50:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:24.141 ************************************ 00:08:24.141 END TEST nvmf_multitarget 00:08:24.141 ************************************ 00:08:24.141 23:50:24 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:24.141 23:50:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:24.141 23:50:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.141 23:50:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.141 ************************************ 00:08:24.141 START TEST nvmf_rpc 00:08:24.141 ************************************ 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:24.141 * Looking for test storage... 00:08:24.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.141 23:50:24 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.141 23:50:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.142 
23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.142 23:50:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:30.716 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:30.716 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.716 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:30.717 Found net devices under 0000:af:00.0: cvl_0_0 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.717 
23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:30.717 Found net devices under 0000:af:00.1: cvl_0_1 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.717 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.976 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:08:30.977 00:08:30.977 --- 10.0.0.2 ping statistics --- 00:08:30.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.977 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:08:30.977 00:08:30.977 --- 10.0.0.1 ping statistics --- 00:08:30.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.977 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3455581 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3455581 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3455581 ']' 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:30.977 23:50:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.977 [2024-05-14 23:50:31.446814] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:08:30.977 [2024-05-14 23:50:31.446864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.977 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.977 [2024-05-14 23:50:31.521455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.236 [2024-05-14 23:50:31.600970] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.236 [2024-05-14 23:50:31.601005] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.236 [2024-05-14 23:50:31.601015] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.236 [2024-05-14 23:50:31.601024] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.236 [2024-05-14 23:50:31.601031] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.236 [2024-05-14 23:50:31.601095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.236 [2024-05-14 23:50:31.604207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.236 [2024-05-14 23:50:31.604232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.236 [2024-05-14 23:50:31.608208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:31.806 "tick_rate": 2500000000, 00:08:31.806 "poll_groups": [ 00:08:31.806 { 00:08:31.806 "name": "nvmf_tgt_poll_group_000", 00:08:31.806 "admin_qpairs": 0, 00:08:31.806 "io_qpairs": 0, 00:08:31.806 "current_admin_qpairs": 0, 00:08:31.806 "current_io_qpairs": 0, 00:08:31.806 "pending_bdev_io": 0, 00:08:31.806 "completed_nvme_io": 0, 00:08:31.806 "transports": [] 00:08:31.806 }, 00:08:31.806 { 00:08:31.806 "name": "nvmf_tgt_poll_group_001", 00:08:31.806 "admin_qpairs": 0, 00:08:31.806 "io_qpairs": 0, 00:08:31.806 "current_admin_qpairs": 0, 00:08:31.806 "current_io_qpairs": 0, 00:08:31.806 "pending_bdev_io": 0, 00:08:31.806 "completed_nvme_io": 0, 00:08:31.806 "transports": [] 00:08:31.806 }, 00:08:31.806 { 00:08:31.806 "name": "nvmf_tgt_poll_group_002", 00:08:31.806 "admin_qpairs": 0, 00:08:31.806 "io_qpairs": 0, 00:08:31.806 "current_admin_qpairs": 0, 00:08:31.806 "current_io_qpairs": 0, 00:08:31.806 "pending_bdev_io": 0, 00:08:31.806 "completed_nvme_io": 0, 00:08:31.806 "transports": [] 
00:08:31.806 }, 00:08:31.806 { 00:08:31.806 "name": "nvmf_tgt_poll_group_003", 00:08:31.806 "admin_qpairs": 0, 00:08:31.806 "io_qpairs": 0, 00:08:31.806 "current_admin_qpairs": 0, 00:08:31.806 "current_io_qpairs": 0, 00:08:31.806 "pending_bdev_io": 0, 00:08:31.806 "completed_nvme_io": 0, 00:08:31.806 "transports": [] 00:08:31.806 } 00:08:31.806 ] 00:08:31.806 }' 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:31.806 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.066 [2024-05-14 23:50:32.419293] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:32.066 "tick_rate": 2500000000, 00:08:32.066 "poll_groups": [ 00:08:32.066 { 00:08:32.066 "name": "nvmf_tgt_poll_group_000", 00:08:32.066 "admin_qpairs": 0, 00:08:32.066 "io_qpairs": 0, 00:08:32.066 "current_admin_qpairs": 0, 00:08:32.066 "current_io_qpairs": 0, 00:08:32.066 "pending_bdev_io": 0, 00:08:32.066 "completed_nvme_io": 0, 00:08:32.066 "transports": [ 00:08:32.066 { 00:08:32.066 "trtype": "TCP" 00:08:32.066 } 00:08:32.066 ] 00:08:32.066 }, 00:08:32.066 { 00:08:32.066 "name": "nvmf_tgt_poll_group_001", 00:08:32.066 "admin_qpairs": 0, 00:08:32.066 "io_qpairs": 0, 00:08:32.066 "current_admin_qpairs": 0, 00:08:32.066 "current_io_qpairs": 0, 00:08:32.066 "pending_bdev_io": 0, 00:08:32.066 "completed_nvme_io": 0, 00:08:32.066 "transports": [ 00:08:32.066 { 00:08:32.066 "trtype": "TCP" 00:08:32.066 } 00:08:32.066 ] 00:08:32.066 }, 00:08:32.066 { 00:08:32.066 "name": "nvmf_tgt_poll_group_002", 00:08:32.066 "admin_qpairs": 0, 00:08:32.066 "io_qpairs": 0, 00:08:32.066 "current_admin_qpairs": 0, 00:08:32.066 "current_io_qpairs": 0, 00:08:32.066 "pending_bdev_io": 0, 00:08:32.066 "completed_nvme_io": 0, 00:08:32.066 "transports": [ 00:08:32.066 { 00:08:32.066 "trtype": "TCP" 00:08:32.066 } 00:08:32.066 ] 00:08:32.066 }, 00:08:32.066 { 00:08:32.066 "name": "nvmf_tgt_poll_group_003", 00:08:32.066 "admin_qpairs": 0, 00:08:32.066 "io_qpairs": 0, 00:08:32.066 "current_admin_qpairs": 0, 00:08:32.066 "current_io_qpairs": 0, 00:08:32.066 "pending_bdev_io": 0, 00:08:32.066 "completed_nvme_io": 0, 00:08:32.066 "transports": [ 00:08:32.066 { 00:08:32.066 "trtype": "TCP" 00:08:32.066 } 00:08:32.066 ] 00:08:32.066 } 00:08:32.066 ] 
00:08:32.066 }' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.066 Malloc1 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.066 [2024-05-14 23:50:32.598112] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:32.066 [2024-05-14 23:50:32.598364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.066 23:50:32 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.066 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:32.067 [2024-05-14 23:50:32.627153] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:08:32.067 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:32.067 could not add new controller: failed to write to nvme-fabrics device 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.067 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.326 23:50:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.326 23:50:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:33.705 23:50:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
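The NOT-wrapped connect above is the access-control check: with allow_any_host disabled, the initiator's NQN is rejected ("does not allow host ...") until nvmf_subsystem_add_host whitelists it, after which the same connect succeeds. A condensed sketch of that sequence, assuming rpc.py is on PATH and $NVME_HOSTNQN / $NVME_HOSTID carry the uuid:006f0d1b-... values from the trace:

# host NQN not yet allowed on the subsystem: this connect is expected to fail
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" && echo "unexpectedly allowed"
# add the host NQN, then the identical connect goes through
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"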
00:08:33.705 23:50:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:33.705 23:50:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:33.705 23:50:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:33.705 23:50:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:35.648 23:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:35.648 23:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:35.648 23:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:35.648 23:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:35.648 23:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:35.648 23:50:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:35.648 23:50:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:35.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:35.648 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.648 [2024-05-14 23:50:36.124163] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:08:35.648 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:35.649 could not add new controller: failed to write to nvme-fabrics device 00:08:35.649 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:35.649 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:35.649 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:35.649 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:35.649 23:50:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:35.649 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.649 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.649 23:50:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.649 23:50:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:37.028 23:50:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:37.028 23:50:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:37.028 23:50:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:37.028 23:50:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:37.028 23:50:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:38.935 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:39.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.195 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.455 [2024-05-14 23:50:39.798439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.455 23:50:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:40.835 23:50:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:40.835 23:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:40.835 23:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:40.835 23:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:40.835 23:50:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:42.742 
23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.742 [2024-05-14 23:50:43.317963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.742 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.000 23:50:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.000 23:50:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:44.380 23:50:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:44.380 23:50:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:44.380 23:50:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:44.380 23:50:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:44.380 23:50:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:46.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.287 23:50:46 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.287 [2024-05-14 23:50:46.870688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.287 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.546 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.546 23:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:46.546 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.546 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.546 23:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.546 23:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.922 23:50:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:47.922 23:50:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:47.922 23:50:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:47.922 23:50:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:47.922 23:50:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:49.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:49.825 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.083 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.084 [2024-05-14 23:50:50.463120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.084 23:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:51.460 23:50:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:08:51.460 23:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:51.460 23:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.460 23:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:51.460 23:50:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.376 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.377 
[2024-05-14 23:50:53.940805] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.377 23:50:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.755 23:50:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.755 23:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:54.755 23:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.755 23:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:54.755 23:50:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:56.662 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:56.662 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:56.662 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.923 [2024-05-14 23:50:57.452340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.923 [2024-05-14 23:50:57.500429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.923 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 [2024-05-14 23:50:57.552571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.183 
23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 [2024-05-14 23:50:57.600748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.183 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.184 23:50:57 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 [2024-05-14 23:50:57.648903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
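The loop that has just finished is the second variant: it cycles the same subsystem five times without ever connecting a host, and it adds the namespace with no explicit ID so the target assigns nsid 1, which is then removed by that number. Before the stats dump that follows, here is a condensed sketch of that variant under the same assumptions (illustrative rpc.py path, pre-created Malloc1 bdev):

    rpc=./scripts/rpc.py                # assumed rpc.py location
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1    # no -n: target picks nsid 1
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_delete_subsystem "$nqn"
    done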
00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:57.184 "tick_rate": 2500000000, 00:08:57.184 "poll_groups": [ 00:08:57.184 { 00:08:57.184 "name": "nvmf_tgt_poll_group_000", 00:08:57.184 "admin_qpairs": 2, 00:08:57.184 "io_qpairs": 196, 00:08:57.184 "current_admin_qpairs": 0, 00:08:57.184 "current_io_qpairs": 0, 00:08:57.184 "pending_bdev_io": 0, 00:08:57.184 "completed_nvme_io": 342, 00:08:57.184 "transports": [ 00:08:57.184 { 00:08:57.184 "trtype": "TCP" 00:08:57.184 } 00:08:57.184 ] 00:08:57.184 }, 00:08:57.184 { 00:08:57.184 "name": "nvmf_tgt_poll_group_001", 00:08:57.184 "admin_qpairs": 2, 00:08:57.184 "io_qpairs": 196, 00:08:57.184 "current_admin_qpairs": 0, 00:08:57.184 "current_io_qpairs": 0, 00:08:57.184 "pending_bdev_io": 0, 00:08:57.184 "completed_nvme_io": 346, 00:08:57.184 "transports": [ 00:08:57.184 { 00:08:57.184 "trtype": "TCP" 00:08:57.184 } 00:08:57.184 ] 00:08:57.184 }, 00:08:57.184 { 00:08:57.184 "name": "nvmf_tgt_poll_group_002", 00:08:57.184 "admin_qpairs": 1, 00:08:57.184 "io_qpairs": 196, 00:08:57.184 "current_admin_qpairs": 0, 00:08:57.184 "current_io_qpairs": 0, 00:08:57.184 "pending_bdev_io": 0, 00:08:57.184 "completed_nvme_io": 248, 00:08:57.184 "transports": [ 00:08:57.184 { 00:08:57.184 "trtype": "TCP" 00:08:57.184 } 00:08:57.184 ] 00:08:57.184 }, 00:08:57.184 { 00:08:57.184 "name": "nvmf_tgt_poll_group_003", 00:08:57.184 "admin_qpairs": 2, 00:08:57.184 "io_qpairs": 196, 00:08:57.184 "current_admin_qpairs": 0, 00:08:57.184 "current_io_qpairs": 0, 00:08:57.184 "pending_bdev_io": 0, 00:08:57.184 "completed_nvme_io": 198, 00:08:57.184 "transports": [ 00:08:57.184 { 00:08:57.184 "trtype": "TCP" 00:08:57.184 } 00:08:57.184 ] 00:08:57.184 } 00:08:57.184 ] 00:08:57.184 }' 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:57.184 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.443 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:08:57.443 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:57.443 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:57.443 23:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:57.443 23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.443 23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:57.443 23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.443 23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:57.443 23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.444 rmmod nvme_tcp 00:08:57.444 rmmod nvme_fabrics 00:08:57.444 rmmod nvme_keyring 00:08:57.444 
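The pass/fail decision above is just a sum over the per-poll-group counters returned by nvmf_get_stats: admin_qpairs across the four poll groups must be greater than zero (2+2+1+2 = 7 here) and io_qpairs likewise (4 × 196 = 784). A small sketch of that jq + awk aggregation, with an assumed rpc.py path:

    # Sum one numeric field across all poll groups in nvmf_get_stats output.
    jsum() {
        local filter=$1
        ./scripts/rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))    # 7 in the run above
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))       # 784 in the run above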
23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3455581 ']' 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3455581 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3455581 ']' 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3455581 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3455581 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3455581' 00:08:57.444 killing process with pid 3455581 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3455581 00:08:57.444 [2024-05-14 23:50:57.925045] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:57.444 23:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3455581 00:08:57.703 23:50:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.703 23:50:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.703 23:50:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.703 23:50:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.703 23:50:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.703 23:50:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.703 23:50:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.703 23:50:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.241 23:51:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.241 00:09:00.241 real 0m35.893s 00:09:00.241 user 1m46.834s 00:09:00.241 sys 0m8.231s 00:09:00.241 23:51:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:00.241 23:51:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.241 ************************************ 00:09:00.241 END TEST nvmf_rpc 00:09:00.241 ************************************ 00:09:00.241 23:51:00 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:00.241 23:51:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:00.241 23:51:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:00.241 23:51:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:00.241 ************************************ 00:09:00.241 START TEST nvmf_invalid 00:09:00.241 ************************************ 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- 
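The cleanup traced just above, before the nvmf_invalid run begins, amounts to three things: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process by PID, and remove the target's network namespace along with the leftover initiator address. Roughly, with the PID variable, namespace and interface names taken from this particular run:

    # Host side: drop the kernel initiator modules (ignore errors if still busy).
    modprobe -v -r nvme-tcp nvme-fabrics || true

    # Target side: stop nvmf_tgt and undo the namespace setup from nvmftestinit.
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true   # wait only works if the target was launched from this shell
    ip netns delete cvl_0_0_ns_spdk       # namespace used in this run
    ip -4 addr flush cvl_0_1              # initiator-side interface in this run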
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:00.241 * Looking for test storage... 00:09:00.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.241 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.242 23:51:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.816 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:06.817 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:06.817 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:06.817 Found net devices under 0000:af:00.0: cvl_0_0 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- 
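The discovery being traced here is plain sysfs walking: for each whitelisted E810 PCI address (device ID 0x159b), the network interfaces bound to it live under /sys/bus/pci/devices/<bdf>/net/, and the same lookup runs for each port. A small sketch using the two addresses found on this machine (substitute your own):

    # List the net interfaces behind each E810 port found in this run.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net device under $pci: $(basename "$netdir")"
        done
    done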
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:06.817 Found net devices under 0000:af:00.1: cvl_0_1 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.817 23:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:09:06.817 00:09:06.817 --- 10.0.0.2 ping statistics --- 00:09:06.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.817 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:09:06.817 00:09:06.817 --- 10.0.0.1 ping statistics --- 00:09:06.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.817 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3464485 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3464485 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3464485 ']' 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:06.817 23:51:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:06.817 [2024-05-14 23:51:07.330753] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
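What the trace above sets up is a target/initiator split across network namespaces on a single box: port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace and acts as the target, port cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, and nvmf_tgt is then launched inside that namespace. Condensed, with the interface names from this run and an abbreviated nvmf_tgt path:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions, load the host driver, start the target inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!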
00:09:06.817 [2024-05-14 23:51:07.330802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.817 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.817 [2024-05-14 23:51:07.404516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.077 [2024-05-14 23:51:07.486169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.077 [2024-05-14 23:51:07.486209] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.077 [2024-05-14 23:51:07.486219] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.077 [2024-05-14 23:51:07.486228] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.077 [2024-05-14 23:51:07.486235] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.077 [2024-05-14 23:51:07.486280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.077 [2024-05-14 23:51:07.486376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.077 [2024-05-14 23:51:07.486461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.077 [2024-05-14 23:51:07.486463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.645 23:51:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:07.645 23:51:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:09:07.645 23:51:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:07.645 23:51:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.645 23:51:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:07.646 23:51:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.646 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:07.646 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17157 00:09:07.905 [2024-05-14 23:51:08.331469] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:07.905 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:07.905 { 00:09:07.905 "nqn": "nqn.2016-06.io.spdk:cnode17157", 00:09:07.905 "tgt_name": "foobar", 00:09:07.905 "method": "nvmf_create_subsystem", 00:09:07.905 "req_id": 1 00:09:07.905 } 00:09:07.905 Got JSON-RPC error response 00:09:07.905 response: 00:09:07.905 { 00:09:07.905 "code": -32603, 00:09:07.905 "message": "Unable to find target foobar" 00:09:07.905 }' 00:09:07.905 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:07.905 { 00:09:07.905 "nqn": "nqn.2016-06.io.spdk:cnode17157", 00:09:07.905 "tgt_name": "foobar", 00:09:07.905 "method": "nvmf_create_subsystem", 00:09:07.905 "req_id": 1 00:09:07.905 } 00:09:07.905 Got JSON-RPC error response 00:09:07.905 response: 00:09:07.905 { 00:09:07.905 "code": -32603, 00:09:07.905 "message": "Unable to find target foobar" 00:09:07.905 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:07.905 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:07.905 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10668 00:09:08.165 [2024-05-14 23:51:08.520174] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10668: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:08.165 { 00:09:08.165 "nqn": "nqn.2016-06.io.spdk:cnode10668", 00:09:08.165 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:08.165 "method": "nvmf_create_subsystem", 00:09:08.165 "req_id": 1 00:09:08.165 } 00:09:08.165 Got JSON-RPC error response 00:09:08.165 response: 00:09:08.165 { 00:09:08.165 "code": -32602, 00:09:08.165 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:08.165 }' 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:08.165 { 00:09:08.165 "nqn": "nqn.2016-06.io.spdk:cnode10668", 00:09:08.165 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:08.165 "method": "nvmf_create_subsystem", 00:09:08.165 "req_id": 1 00:09:08.165 } 00:09:08.165 Got JSON-RPC error response 00:09:08.165 response: 00:09:08.165 { 00:09:08.165 "code": -32602, 00:09:08.165 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:08.165 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11753 00:09:08.165 [2024-05-14 23:51:08.708768] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11753: invalid model number 'SPDK_Controller' 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:08.165 { 00:09:08.165 "nqn": "nqn.2016-06.io.spdk:cnode11753", 00:09:08.165 "model_number": "SPDK_Controller\u001f", 00:09:08.165 "method": "nvmf_create_subsystem", 00:09:08.165 "req_id": 1 00:09:08.165 } 00:09:08.165 Got JSON-RPC error response 00:09:08.165 response: 00:09:08.165 { 00:09:08.165 "code": -32602, 00:09:08.165 "message": "Invalid MN SPDK_Controller\u001f" 00:09:08.165 }' 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:08.165 { 00:09:08.165 "nqn": "nqn.2016-06.io.spdk:cnode11753", 00:09:08.165 "model_number": "SPDK_Controller\u001f", 00:09:08.165 "method": "nvmf_create_subsystem", 00:09:08.165 "req_id": 1 00:09:08.165 } 00:09:08.165 Got JSON-RPC error response 00:09:08.165 response: 00:09:08.165 { 00:09:08.165 "code": -32602, 00:09:08.165 "message": "Invalid MN SPDK_Controller\u001f" 00:09:08.165 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.165 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ # == \- ]] 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '#wtI=?P(\k#gU\X3m(Vm/' 00:09:08.426 23:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '#wtI=?P(\k#gU\X3m(Vm/' nqn.2016-06.io.spdk:cnode4733 00:09:08.687 [2024-05-14 
23:51:09.057941] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4733: invalid serial number '#wtI=?P(\k#gU\X3m(Vm/' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:08.687 { 00:09:08.687 "nqn": "nqn.2016-06.io.spdk:cnode4733", 00:09:08.687 "serial_number": "#wtI=?P(\\k#gU\\X3m(Vm/", 00:09:08.687 "method": "nvmf_create_subsystem", 00:09:08.687 "req_id": 1 00:09:08.687 } 00:09:08.687 Got JSON-RPC error response 00:09:08.687 response: 00:09:08.687 { 00:09:08.687 "code": -32602, 00:09:08.687 "message": "Invalid SN #wtI=?P(\\k#gU\\X3m(Vm/" 00:09:08.687 }' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:08.687 { 00:09:08.687 "nqn": "nqn.2016-06.io.spdk:cnode4733", 00:09:08.687 "serial_number": "#wtI=?P(\\k#gU\\X3m(Vm/", 00:09:08.687 "method": "nvmf_create_subsystem", 00:09:08.687 "req_id": 1 00:09:08.687 } 00:09:08.687 Got JSON-RPC error response 00:09:08.687 response: 00:09:08.687 { 00:09:08.687 "code": -32602, 00:09:08.687 "message": "Invalid SN #wtI=?P(\\k#gU\\X3m(Vm/" 00:09:08.687 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:08.687 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 70 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:08.688 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.948 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 
00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 't/"R#cI8)k>F?y/\?2r0XIz6'\''&s1R9v*gCE65`] 5' 00:09:08.949 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 't/"R#cI8)k>F?y/\?2r0XIz6'\''&s1R9v*gCE65`] 5' nqn.2016-06.io.spdk:cnode24232 00:09:09.208 [2024-05-14 23:51:09.567651] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24232: invalid model number 't/"R#cI8)k>F?y/\?2r0XIz6'&s1R9v*gCE65`] 5' 00:09:09.208 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:09.208 { 00:09:09.208 "nqn": "nqn.2016-06.io.spdk:cnode24232", 00:09:09.208 "model_number": "t/\"R#cI8)k>F?y/\\?2r0XIz6'\''&s1R9v*gCE65`] 5", 00:09:09.208 "method": "nvmf_create_subsystem", 00:09:09.208 "req_id": 1 00:09:09.208 } 00:09:09.208 Got JSON-RPC error response 00:09:09.208 response: 00:09:09.208 { 00:09:09.208 "code": -32602, 00:09:09.208 "message": "Invalid MN t/\"R#cI8)k>F?y/\\?2r0XIz6'\''&s1R9v*gCE65`] 5" 00:09:09.208 }' 00:09:09.208 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:09.208 { 00:09:09.208 "nqn": "nqn.2016-06.io.spdk:cnode24232", 00:09:09.208 "model_number": "t/\"R#cI8)k>F?y/\\?2r0XIz6'&s1R9v*gCE65`] 5", 00:09:09.208 "method": "nvmf_create_subsystem", 00:09:09.208 "req_id": 1 00:09:09.208 } 00:09:09.208 Got JSON-RPC error response 00:09:09.208 response: 00:09:09.208 { 00:09:09.208 "code": -32602, 00:09:09.208 "message": "Invalid MN t/\"R#cI8)k>F?y/\\?2r0XIz6'&s1R9v*gCE65`] 5" 00:09:09.208 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:09.208 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:09.208 [2024-05-14 23:51:09.752324] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.208 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:09.468 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:09.468 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:09.468 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:09.468 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:09.468 23:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:09.728 [2024-05-14 23:51:10.133534] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:09.728 [2024-05-14 23:51:10.133610] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:09.728 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:09.728 { 00:09:09.728 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:09.728 "listen_address": { 00:09:09.728 "trtype": "tcp", 00:09:09.728 "traddr": "", 00:09:09.728 "trsvcid": "4421" 00:09:09.728 }, 00:09:09.728 "method": "nvmf_subsystem_remove_listener", 00:09:09.728 "req_id": 1 00:09:09.728 } 00:09:09.728 Got JSON-RPC error response 00:09:09.728 response: 00:09:09.728 { 00:09:09.728 "code": -32602, 00:09:09.728 "message": "Invalid 
parameters" 00:09:09.728 }' 00:09:09.728 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:09.728 { 00:09:09.728 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:09.728 "listen_address": { 00:09:09.728 "trtype": "tcp", 00:09:09.728 "traddr": "", 00:09:09.728 "trsvcid": "4421" 00:09:09.728 }, 00:09:09.728 "method": "nvmf_subsystem_remove_listener", 00:09:09.728 "req_id": 1 00:09:09.728 } 00:09:09.728 Got JSON-RPC error response 00:09:09.728 response: 00:09:09.728 { 00:09:09.728 "code": -32602, 00:09:09.728 "message": "Invalid parameters" 00:09:09.728 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:09.728 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21658 -i 0 00:09:09.728 [2024-05-14 23:51:10.310120] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21658: invalid cntlid range [0-65519] 00:09:09.987 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:09.987 { 00:09:09.987 "nqn": "nqn.2016-06.io.spdk:cnode21658", 00:09:09.987 "min_cntlid": 0, 00:09:09.987 "method": "nvmf_create_subsystem", 00:09:09.987 "req_id": 1 00:09:09.987 } 00:09:09.987 Got JSON-RPC error response 00:09:09.987 response: 00:09:09.987 { 00:09:09.987 "code": -32602, 00:09:09.987 "message": "Invalid cntlid range [0-65519]" 00:09:09.987 }' 00:09:09.987 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:09.987 { 00:09:09.987 "nqn": "nqn.2016-06.io.spdk:cnode21658", 00:09:09.987 "min_cntlid": 0, 00:09:09.987 "method": "nvmf_create_subsystem", 00:09:09.987 "req_id": 1 00:09:09.987 } 00:09:09.987 Got JSON-RPC error response 00:09:09.987 response: 00:09:09.987 { 00:09:09.987 "code": -32602, 00:09:09.987 "message": "Invalid cntlid range [0-65519]" 00:09:09.987 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:09.987 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2141 -i 65520 00:09:09.987 [2024-05-14 23:51:10.502805] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2141: invalid cntlid range [65520-65519] 00:09:09.987 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:09.987 { 00:09:09.987 "nqn": "nqn.2016-06.io.spdk:cnode2141", 00:09:09.987 "min_cntlid": 65520, 00:09:09.987 "method": "nvmf_create_subsystem", 00:09:09.987 "req_id": 1 00:09:09.987 } 00:09:09.987 Got JSON-RPC error response 00:09:09.987 response: 00:09:09.987 { 00:09:09.987 "code": -32602, 00:09:09.987 "message": "Invalid cntlid range [65520-65519]" 00:09:09.987 }' 00:09:09.987 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:09.987 { 00:09:09.987 "nqn": "nqn.2016-06.io.spdk:cnode2141", 00:09:09.987 "min_cntlid": 65520, 00:09:09.987 "method": "nvmf_create_subsystem", 00:09:09.987 "req_id": 1 00:09:09.987 } 00:09:09.987 Got JSON-RPC error response 00:09:09.987 response: 00:09:09.987 { 00:09:09.987 "code": -32602, 00:09:09.987 "message": "Invalid cntlid range [65520-65519]" 00:09:09.987 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:09.987 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2500 -I 0 00:09:10.246 [2024-05-14 23:51:10.695387] nvmf_rpc.c: 
429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2500: invalid cntlid range [1-0] 00:09:10.246 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:10.246 { 00:09:10.246 "nqn": "nqn.2016-06.io.spdk:cnode2500", 00:09:10.246 "max_cntlid": 0, 00:09:10.246 "method": "nvmf_create_subsystem", 00:09:10.246 "req_id": 1 00:09:10.246 } 00:09:10.246 Got JSON-RPC error response 00:09:10.246 response: 00:09:10.246 { 00:09:10.246 "code": -32602, 00:09:10.246 "message": "Invalid cntlid range [1-0]" 00:09:10.246 }' 00:09:10.246 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:10.246 { 00:09:10.246 "nqn": "nqn.2016-06.io.spdk:cnode2500", 00:09:10.246 "max_cntlid": 0, 00:09:10.246 "method": "nvmf_create_subsystem", 00:09:10.246 "req_id": 1 00:09:10.246 } 00:09:10.246 Got JSON-RPC error response 00:09:10.246 response: 00:09:10.246 { 00:09:10.246 "code": -32602, 00:09:10.246 "message": "Invalid cntlid range [1-0]" 00:09:10.246 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:10.246 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5124 -I 65520 00:09:10.506 [2024-05-14 23:51:10.884024] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5124: invalid cntlid range [1-65520] 00:09:10.506 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:10.506 { 00:09:10.506 "nqn": "nqn.2016-06.io.spdk:cnode5124", 00:09:10.506 "max_cntlid": 65520, 00:09:10.506 "method": "nvmf_create_subsystem", 00:09:10.506 "req_id": 1 00:09:10.506 } 00:09:10.506 Got JSON-RPC error response 00:09:10.506 response: 00:09:10.506 { 00:09:10.506 "code": -32602, 00:09:10.506 "message": "Invalid cntlid range [1-65520]" 00:09:10.506 }' 00:09:10.506 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:10.506 { 00:09:10.506 "nqn": "nqn.2016-06.io.spdk:cnode5124", 00:09:10.506 "max_cntlid": 65520, 00:09:10.506 "method": "nvmf_create_subsystem", 00:09:10.506 "req_id": 1 00:09:10.506 } 00:09:10.506 Got JSON-RPC error response 00:09:10.506 response: 00:09:10.506 { 00:09:10.506 "code": -32602, 00:09:10.506 "message": "Invalid cntlid range [1-65520]" 00:09:10.506 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:10.506 23:51:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18059 -i 6 -I 5 00:09:10.506 [2024-05-14 23:51:11.076658] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18059: invalid cntlid range [6-5] 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:10.766 { 00:09:10.766 "nqn": "nqn.2016-06.io.spdk:cnode18059", 00:09:10.766 "min_cntlid": 6, 00:09:10.766 "max_cntlid": 5, 00:09:10.766 "method": "nvmf_create_subsystem", 00:09:10.766 "req_id": 1 00:09:10.766 } 00:09:10.766 Got JSON-RPC error response 00:09:10.766 response: 00:09:10.766 { 00:09:10.766 "code": -32602, 00:09:10.766 "message": "Invalid cntlid range [6-5]" 00:09:10.766 }' 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:10.766 { 00:09:10.766 "nqn": "nqn.2016-06.io.spdk:cnode18059", 00:09:10.766 "min_cntlid": 6, 00:09:10.766 "max_cntlid": 5, 00:09:10.766 "method": "nvmf_create_subsystem", 00:09:10.766 "req_id": 1 00:09:10.766 } 00:09:10.766 Got JSON-RPC 
error response 00:09:10.766 response: 00:09:10.766 { 00:09:10.766 "code": -32602, 00:09:10.766 "message": "Invalid cntlid range [6-5]" 00:09:10.766 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:10.766 { 00:09:10.766 "name": "foobar", 00:09:10.766 "method": "nvmf_delete_target", 00:09:10.766 "req_id": 1 00:09:10.766 } 00:09:10.766 Got JSON-RPC error response 00:09:10.766 response: 00:09:10.766 { 00:09:10.766 "code": -32602, 00:09:10.766 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:10.766 }' 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:10.766 { 00:09:10.766 "name": "foobar", 00:09:10.766 "method": "nvmf_delete_target", 00:09:10.766 "req_id": 1 00:09:10.766 } 00:09:10.766 Got JSON-RPC error response 00:09:10.766 response: 00:09:10.766 { 00:09:10.766 "code": -32602, 00:09:10.766 "message": "The specified target doesn't exist, cannot delete it." 00:09:10.766 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.766 rmmod nvme_tcp 00:09:10.766 rmmod nvme_fabrics 00:09:10.766 rmmod nvme_keyring 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3464485 ']' 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3464485 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3464485 ']' 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3464485 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3464485 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3464485' 00:09:10.766 killing process with pid 3464485 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 
3464485 00:09:10.766 [2024-05-14 23:51:11.340210] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:10.766 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3464485 00:09:11.025 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.025 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.025 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.025 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.025 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.025 23:51:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.025 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.025 23:51:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.642 23:51:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:13.642 00:09:13.642 real 0m13.346s 00:09:13.642 user 0m20.253s 00:09:13.642 sys 0m6.353s 00:09:13.642 23:51:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:13.642 23:51:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:13.642 ************************************ 00:09:13.642 END TEST nvmf_invalid 00:09:13.642 ************************************ 00:09:13.642 23:51:13 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:13.642 23:51:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:13.642 23:51:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:13.642 23:51:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.642 ************************************ 00:09:13.642 START TEST nvmf_abort 00:09:13.642 ************************************ 00:09:13.642 23:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:13.642 * Looking for test storage... 
00:09:13.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.643 23:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:20.217 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.217 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.218 23:51:20 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:20.218 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:20.218 Found net devices under 0000:af:00.0: cvl_0_0 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:20.218 Found net devices under 0000:af:00.1: cvl_0_1 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:09:20.218 00:09:20.218 --- 10.0.0.2 ping statistics --- 00:09:20.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.218 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:09:20.218 00:09:20.218 --- 10.0.0.1 ping statistics --- 00:09:20.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.218 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3469096 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3469096 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3469096 ']' 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.218 23:51:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:20.218 [2024-05-14 23:51:20.447463] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:09:20.218 [2024-05-14 23:51:20.447510] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.218 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.218 [2024-05-14 23:51:20.521993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.218 [2024-05-14 23:51:20.595128] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.218 [2024-05-14 23:51:20.595165] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:20.218 [2024-05-14 23:51:20.595175] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.218 [2024-05-14 23:51:20.595185] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.218 [2024-05-14 23:51:20.595196] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.219 [2024-05-14 23:51:20.595323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.219 [2024-05-14 23:51:20.595407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.219 [2024-05-14 23:51:20.595409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.787 [2024-05-14 23:51:21.303904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.787 Malloc0 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.787 Delay0 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.787 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.046 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.046 23:51:21 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:21.046 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.046 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.046 [2024-05-14 23:51:21.385822] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:21.046 [2024-05-14 23:51:21.386082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.046 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.046 23:51:21 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.046 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.046 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.046 23:51:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.046 23:51:21 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:21.046 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.046 [2024-05-14 23:51:21.503340] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:23.586 Initializing NVMe Controllers 00:09:23.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:23.586 controller IO queue size 128 less than required 00:09:23.586 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:23.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:23.586 Initialization complete. Launching workers. 
00:09:23.586 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41396 00:09:23.586 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41457, failed to submit 62 00:09:23.586 success 41400, unsuccess 57, failed 0 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.586 rmmod nvme_tcp 00:09:23.586 rmmod nvme_fabrics 00:09:23.586 rmmod nvme_keyring 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3469096 ']' 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3469096 00:09:23.586 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3469096 ']' 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3469096 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3469096 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3469096' 00:09:23.587 killing process with pid 3469096 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3469096 00:09:23.587 [2024-05-14 23:51:23.750950] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3469096 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:23.587 
23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.587 23:51:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.499 23:51:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:25.499 00:09:25.499 real 0m12.343s 00:09:25.499 user 0m13.374s 00:09:25.499 sys 0m6.194s 00:09:25.499 23:51:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:25.499 23:51:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:25.499 ************************************ 00:09:25.499 END TEST nvmf_abort 00:09:25.499 ************************************ 00:09:25.499 23:51:26 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:25.499 23:51:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:25.499 23:51:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:25.499 23:51:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:25.759 ************************************ 00:09:25.759 START TEST nvmf_ns_hotplug_stress 00:09:25.759 ************************************ 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:25.759 * Looking for test storage... 00:09:25.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.759 
23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.759 
23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.759 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.760 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:25.760 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:25.760 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.760 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.760 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.760 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.760 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:25.760 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:25.760 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.760 23:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.334 23:51:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:32.334 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:32.334 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.334 
23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:32.334 Found net devices under 0000:af:00.0: cvl_0_0 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:32.334 Found net devices under 0000:af:00.1: cvl_0_1 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:32.334 
23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.334 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.335 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:32.335 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:32.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:09:32.594 00:09:32.594 --- 10.0.0.2 ping statistics --- 00:09:32.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.594 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:09:32.594 00:09:32.594 --- 10.0.0.1 ping statistics --- 00:09:32.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.594 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.594 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.595 23:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3473397 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3473397 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3473397 ']' 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:32.595 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.595 [2024-05-14 23:51:33.079905] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:09:32.595 [2024-05-14 23:51:33.079951] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.595 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.595 [2024-05-14 23:51:33.153057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.854 [2024-05-14 23:51:33.225160] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:32.854 [2024-05-14 23:51:33.225204] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.854 [2024-05-14 23:51:33.225214] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.854 [2024-05-14 23:51:33.225223] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.854 [2024-05-14 23:51:33.225231] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.854 [2024-05-14 23:51:33.225333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.854 [2024-05-14 23:51:33.225419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.854 [2024-05-14 23:51:33.225421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.420 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:33.420 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:09:33.420 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.420 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.420 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.420 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.420 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:33.420 23:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.678 [2024-05-14 23:51:34.077734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.678 23:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.952 23:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.952 [2024-05-14 23:51:34.459390] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:33.952 [2024-05-14 23:51:34.459678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.952 23:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.210 23:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:34.469 Malloc0 00:09:34.469 23:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:34.469 Delay0 00:09:34.469 23:51:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.727 23:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:34.988 NULL1 00:09:34.988 23:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:35.262 23:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:35.262 23:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3473723 00:09:35.262 23:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:35.262 23:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.262 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.262 Read completed with error (sct=0, sc=11) 00:09:35.262 23:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.534 23:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:35.534 23:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:35.793 true 00:09:35.793 23:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:35.793 23:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.730 23:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.730 23:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:36.730 23:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:36.988 true 00:09:36.988 23:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:36.988 23:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.988 23:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.247 23:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:37.247 23:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:37.506 true 00:09:37.506 23:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:37.506 23:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.506 23:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.765 23:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:37.765 23:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:38.024 true 00:09:38.024 23:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:38.024 23:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.024 23:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.283 23:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:38.283 23:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:38.541 true 00:09:38.541 23:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:38.541 23:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.917 23:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.917 23:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:39.917 23:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:39.917 true 00:09:40.175 23:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:40.175 23:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.001 23:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.001 23:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:41.001 23:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:41.260 true 00:09:41.260 23:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:41.260 23:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.519 23:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.519 23:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:41.519 23:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:41.778 true 00:09:41.778 23:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:41.778 23:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.036 23:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.036 23:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:42.036 23:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:42.295 true 00:09:42.295 23:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:42.295 23:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.555 23:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.555 23:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:42.555 23:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1010 00:09:42.815 true 00:09:42.815 23:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:42.815 23:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.075 23:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.334 23:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:43.334 23:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:43.334 true 00:09:43.334 23:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:43.334 23:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.593 23:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.852 23:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:43.852 23:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:43.852 true 00:09:43.852 23:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:43.852 23:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.111 23:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.370 23:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:44.370 23:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:44.629 true 00:09:44.629 23:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:44.629 23:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.567 23:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.567 23:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:45.567 23:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:45.567 true 00:09:45.826 23:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:45.826 23:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.826 23:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.084 23:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:46.084 23:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:46.343 true 00:09:46.343 23:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:46.343 23:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.720 23:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.720 23:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:47.720 23:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:47.720 true 00:09:47.720 23:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:47.720 23:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.656 23:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.950 23:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:48.950 23:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:48.950 true 00:09:48.950 23:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:48.950 23:51:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.209 23:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.468 23:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:49.468 23:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:49.468 true 00:09:49.468 23:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:49.468 23:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.727 23:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.985 23:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:49.985 23:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:49.985 true 00:09:49.985 23:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:49.985 23:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.244 23:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.504 23:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:50.504 23:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:50.504 true 00:09:50.763 23:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:50.763 23:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.764 23:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.024 23:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:51.024 23:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:51.284 true 00:09:51.284 23:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:51.284 23:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:51.284 23:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.544 23:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:51.544 23:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:51.803 true 00:09:51.803 23:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:51.803 23:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.803 23:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.074 [2024-05-14 23:51:52.554616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.554696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.554740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.554782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.554824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.554867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.554910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.554959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555347] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.555994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 
[2024-05-14 23:51:52.556507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.556985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.074 [2024-05-14 23:51:52.557751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.557799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.558992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559337] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.559988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 
[2024-05-14 23:51:52.560465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.560852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.561997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.562986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.563030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.075 [2024-05-14 23:51:52.563077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563227] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.563996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.564048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.564092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.564136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.564188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.564245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.564449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.564500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.564550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.564933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 
[2024-05-14 23:51:52.564982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:52.076 [2024-05-14 23:51:52.565074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.565967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 
23:51:52.566097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.566973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:52.076 [2024-05-14 23:51:52.567196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.567567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.076 [2024-05-14 23:51:52.568588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.568626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.568671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.568710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.568761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.568802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.568844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.568885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.568923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.568967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569905] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.569997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.570928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.571127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.571174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.571230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 [2024-05-14 23:51:52.571600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077 
[2024-05-14 23:51:52.571651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.077
[2024-05-14 23:51:52.571695 - 23:51:52.583804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (message repeated) 00:09:52.080
23:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:52.080
[2024-05-14 23:51:52.583852 - 23:51:52.584139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (message repeated) 00:09:52.080
23:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
[2024-05-14 23:51:52.584189 - 23:51:52.600038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (message repeated) 00:09:52.083
[2024-05-14 23:51:52.600083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.600971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.601591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.602987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603080] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.603970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.604015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.604060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.604108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.604154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.083 [2024-05-14 23:51:52.604205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 
[2024-05-14 23:51:52.604288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.604958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.605991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.606985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607130] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.607996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 
[2024-05-14 23:51:52.608426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.608646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.084 [2024-05-14 23:51:52.609967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.610975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611335] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.611997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.612029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.612060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.612091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.612308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.612354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.612747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.612803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.612854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.612906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.612957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 
[2024-05-14 23:51:52.613004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.613965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.614955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.615005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.615056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.615102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.615153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.615204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.615254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.085 [2024-05-14 23:51:52.615301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.615345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.615382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.615423] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.615465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.615506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.615552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.615592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.615636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.616954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 
[2024-05-14 23:51:52.617151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.617962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.618966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.619014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.619056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.619099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.619297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.619350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:52.086 [2024-05-14 23:51:52.619737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.619790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.619835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.619882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.086 [2024-05-14 23:51:52.619933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:52.086 [2024-05-14 23:51:52.619986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:52.086 - 00:09:52.092 [... the same "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeats continuously, timestamps 23:51:52.620036 through 23:51:52.647596 ...]
00:09:52.092 [2024-05-14 23:51:52.647642] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.647686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.647728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.647780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.647823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.647868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.647916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.647962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 
[2024-05-14 23:51:52.648887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.648989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.649986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.650019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.650070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.650112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.650705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.650758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.650805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.650856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.650902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.650954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651793] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.651975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.652020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.652062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.652105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.652149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.652200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.652248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.652298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.652342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.652389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.092 [2024-05-14 23:51:52.652442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.652993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 
[2024-05-14 23:51:52.653044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.093 [2024-05-14 23:51:52.653951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.654996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.655944] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.379 [2024-05-14 23:51:52.656874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.656914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.656964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.657006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.657048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.657100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.657134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 
[2024-05-14 23:51:52.657184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.657231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.657817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.657871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.657919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.657972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.658963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.659962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660240] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.660830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 
[2024-05-14 23:51:52.661864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.661998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.380 [2024-05-14 23:51:52.662600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.662645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.662687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.662727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.662770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.662823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.662868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.662915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.662972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.663976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664281] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.664970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 [2024-05-14 23:51:52.665849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.381 
00:09:52.381 [2024-05-14 23:51:52.665897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously; individual repetitions (timestamps 23:51:52.665948 through 23:51:52.674862) elided ...]
00:09:52.383 Message suppressed 999 times: [2024-05-14 23:51:52.674913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:52.383 Read completed with error (sct=0, sc=15)
[... repetitions of the same *ERROR* line continue (timestamps 23:51:52.675301 through 23:51:52.693981) ...]
[2024-05-14 23:51:52.694037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.386 [2024-05-14 23:51:52.694900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.694946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.694994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.695987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696921] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.696971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.697995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 
[2024-05-14 23:51:52.698239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.698971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.699973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.387 [2024-05-14 23:51:52.700809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.700858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.700907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.700959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701167] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.701985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 
[2024-05-14 23:51:52.702343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.702981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.703999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.704995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.705039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.705079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.705120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.705171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.705215] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.705261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.705306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.705347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.388 [2024-05-14 23:51:52.705390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.705978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.706025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.706074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.706120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.706165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.706210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.706257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.706782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.706830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.706882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 
[2024-05-14 23:51:52.706926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.706971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.707993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.708965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709369] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.709799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.710965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.711012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.711062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.711123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 
[2024-05-14 23:51:52.711174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.711225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.711273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.389 [2024-05-14 23:51:52.711324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.711960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.712968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.713996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.714044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390 [2024-05-14 23:51:52.714093] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390
[2024-05-14 23:51:52.714143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.390
[identical *ERROR* lines from 23:51:52.714199 through 23:51:52.728489 omitted] 00:09:52.390
Message suppressed 999 times: [2024-05-14 23:51:52.728536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.393
Read completed with error (sct=0, sc=15) 00:09:52.393
[identical *ERROR* lines from 23:51:52.728580 through 23:51:52.739849 omitted] 00:09:52.393
true 00:09:52.395
[identical *ERROR* lines from 23:51:52.739891 through 23:51:52.743517 omitted] 00:09:52.395
[2024-05-14 23:51:52.743564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.395
[2024-05-14 23:51:52.743614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.395 [2024-05-14 23:51:52.743662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.395 [2024-05-14 23:51:52.743711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.395 [2024-05-14 23:51:52.743759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.395 [2024-05-14 23:51:52.743809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.395 [2024-05-14 23:51:52.743858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.395 [2024-05-14 23:51:52.743908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.395 [2024-05-14 23:51:52.743956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.395 [2024-05-14 23:51:52.744004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.395 [2024-05-14 23:51:52.744054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.744983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.745031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.745263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.745673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.745729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.745776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.745827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.745877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.745924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.745973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746570] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.746964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 
[2024-05-14 23:51:52.747781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.747992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.748537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.749067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.396 [2024-05-14 23:51:52.749115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.749958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750597] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.750996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 
[2024-05-14 23:51:52.751897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.751998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.752977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.753995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.754044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.754091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.754141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.754186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.754242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.754289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.754341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.397 [2024-05-14 23:51:52.754389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754718] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.754967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.755975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 
[2024-05-14 23:51:52.756365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.756998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.757995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758723] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.758870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.398 [2024-05-14 23:51:52.759974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 
[2024-05-14 23:51:52.760354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.760967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.761967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.762960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763218] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 23:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:52.399 [2024-05-14 23:51:52.763694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.763997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 23:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.399 [2024-05-14 23:51:52.764048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.764101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.764150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.764202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.764244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.764287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.399 [2024-05-14 23:51:52.764328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
00:09:52.403 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:52.404 [2024-05-14 23:51:52.791206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.404 [2024-05-14 23:51:52.791901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.791943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.791985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792428] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.792984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 
[2024-05-14 23:51:52.793685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.793790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.794965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.795982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.796034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.796083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.796135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.796184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.796238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.796287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.405 [2024-05-14 23:51:52.796334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796551] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.796987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.797975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 
[2024-05-14 23:51:52.798246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.798982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.799972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800632] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.800738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.801963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.802013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.802057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.802100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.802144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.802187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.406 [2024-05-14 23:51:52.802234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 
[2024-05-14 23:51:52.802318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.802969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.803968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.804980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805160] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.805994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 
[2024-05-14 23:51:52.806395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.806972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.807587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.808117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.808172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.808225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.808275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.407 [2024-05-14 23:51:52.808325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.808985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.809033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.809084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.809137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.809201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.809247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.809296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.809344] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408 [2024-05-14 23:51:52.809394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.408
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line ("Read NLB 1 * block size 512 > SGL length 1") repeats continuously from 2024-05-14 23:51:52.809394 through 23:51:52.838142 ...]
00:09:52.412 [2024-05-14 23:51:52.838187] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.412 [2024-05-14 23:51:52.838240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.412 [2024-05-14 23:51:52.838284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.412 [2024-05-14 23:51:52.838319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.412 [2024-05-14 23:51:52.838362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.838405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.838453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.838497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.838540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.838581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.838624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.838672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.838871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:52.413 [2024-05-14 23:51:52.839233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839841] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.839990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.840951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 
[2024-05-14 23:51:52.840997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.841993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.842980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.843024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.843067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.843110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.843155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.843207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.413 [2024-05-14 23:51:52.843254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843870] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.843958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.844985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 
[2024-05-14 23:51:52.845122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.845617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.846960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.847994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848037] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.848992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.849042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.849093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.849633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.849688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 
[2024-05-14 23:51:52.849739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.849784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.849830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.849877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.849910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.849956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.849999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.414 [2024-05-14 23:51:52.850047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.850958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.851983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852103] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.852721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 
[2024-05-14 23:51:52.853863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.853972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.854972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.855975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.856026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.856561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.856618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 [2024-05-14 23:51:52.856668] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.415 
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeated continuously from 2024-05-14 23:51:52.856717 through 23:51:52.885956 (wall clock 00:09:52.415 - 00:09:52.420); duplicate log lines omitted ...]
[2024-05-14 23:51:52.886000] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.886990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.887039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.887087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.887138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.887195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 
[2024-05-14 23:51:52.887241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.887293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.887342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.887390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.887441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.887491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.887687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.888981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.420 [2024-05-14 23:51:52.889965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890111] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.890985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 
[2024-05-14 23:51:52.891804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.891988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:52.421 [2024-05-14 23:51:52.892027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.892998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 
23:51:52.893047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.893973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:52.421 [2024-05-14 23:51:52.894331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.894734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.895108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.895153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.895202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.895246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.895288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.895332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.895375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.421 [2024-05-14 23:51:52.895424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.895997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.896981] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.897847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 
[2024-05-14 23:51:52.898799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.898994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.899985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.422 [2024-05-14 23:51:52.900532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.900582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.900630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.900678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.900727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.900776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.900826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.900874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.900921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.900970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901173] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.901985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 
[2024-05-14 23:51:52.902816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.902991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.903996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.904745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905672] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.905969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.906018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.906063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.906108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.906149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.423 [2024-05-14 23:51:52.906198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 
[2024-05-14 23:51:52.906860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.906953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.907994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.908047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.908096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.908141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.908198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.908247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.908295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.908497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.908865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.908910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.908954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909742] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.909966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.910977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 
[2024-05-14 23:51:52.911022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.424 [2024-05-14 23:51:52.911513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.911560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.911604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.911647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.911680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.911722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.912995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913898] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.913988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.914991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.915044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.915092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.915138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 
[2024-05-14 23:51:52.915186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.915244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.915298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.915347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.915560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.915941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.915990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.425 [2024-05-14 23:51:52.916579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.916618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.916666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.916706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.916749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.916794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.916831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.916871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.916915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.916953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.916995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.917974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918028] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.918698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 
[2024-05-14 23:51:52.919811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.919962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.920997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.426 [2024-05-14 23:51:52.921883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.921932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.921984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.922037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.922084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.922135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.922184] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.922237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.922288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.922503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.922881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.922932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.922986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 
[2024-05-14 23:51:52.923882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.923967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.924956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.925690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926871] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.926973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.427 [2024-05-14 23:51:52.927710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.927758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.927810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.927858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.927902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.927950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.927995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 
[2024-05-14 23:51:52.928134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.928987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.929030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.929075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.929117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.929160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.929210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.929260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.929311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.929356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 [2024-05-14 23:51:52.929553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.707 23:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.707 [2024-05-14 23:51:53.120436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.120998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.121043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.121089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.121137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.121185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.121233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.707 [2024-05-14 23:51:53.121279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 
(same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd read error repeated continuously from 23:51:53.120507 through 23:51:53.134838)
(same read error repeated from 23:51:53.134886 through 23:51:53.135224)
00:09:52.710 23:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
(same read error repeated from 23:51:53.135276 through 23:51:53.135624)
00:09:52.710 23:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
(same read error repeated from 23:51:53.135666 through 23:51:53.135966)
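The @49/@50 traces above are the null-bdev resize step: the script sets null_size=1024 and then calls the bdev_null_resize RPC on NULL1. A sketch of just those two traced lines as a standalone script, assuming the rpc.py path from the trace; the surrounding loop that chooses the size is not visible in the log and is not reproduced here:

    #!/usr/bin/env bash
    # Resize step, as traced at target/ns_hotplug_stress.sh@49-50.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # ns_hotplug_stress.sh@49: new size for the null bdev under test
    null_size=1024

    # ns_hotplug_stress.sh@50: resize the NULL1 null bdev to the new value
    "$RPC" bdev_null_resize NULL1 "$null_size"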
(same read error repeated from 23:51:53.136008 through 23:51:53.137677)
00:09:52.710 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
(same read error repeated from 23:51:53.137727 through 23:51:53.138725)
(same read error repeated from 23:51:53.138777 through 23:51:53.147005)
00:09:52.712 [2024-05-14 23:51:53.147054] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.147105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.147155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.147207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.147258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.147460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.147823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.147868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.147915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.147969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.148016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.148063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.148097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.148141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.148187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.148235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.148281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.148329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.148371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.712 [2024-05-14 23:51:53.148414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.148456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.148500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.148543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.148585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.148637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.148691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 
[2024-05-14 23:51:53.148749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.148797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.148851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.148905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.148954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.149985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.150744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151663] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.151967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 
[2024-05-14 23:51:53.152905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.152958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.713 [2024-05-14 23:51:53.153543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.153587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.153624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.153669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.153715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.153759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.153808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.153851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.153895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.153943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.153988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.154969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155772] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.155983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.156924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 
[2024-05-14 23:51:53.156965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.157625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.158952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.159011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.159059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.159108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.714 [2024-05-14 23:51:53.159159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.159961] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.160979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 
[2024-05-14 23:51:53.161110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.161985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.162999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163893] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.163978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.715 [2024-05-14 23:51:53.164022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.164068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.164110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.164153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.164204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.164253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.164301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.164352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.164402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.164450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.164496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 [2024-05-14 23:51:53.165566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 
[2024-05-14 23:51:53.165612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.716 
[the same "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeats continuously from 23:51:53.165612 through 23:51:53.194808 (elapsed 00:09:52.716 - 00:09:52.722); repeated occurrences elided] 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:52.722 
[2024-05-14 23:51:53.194808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.194852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.194896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.194943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.194991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.195901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196551] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.196971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.722 [2024-05-14 23:51:53.197447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 
[2024-05-14 23:51:53.197707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.197997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.198992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.199991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200603] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.200958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 
[2024-05-14 23:51:53.201904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.201997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.723 [2024-05-14 23:51:53.202527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.202569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.202617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.202661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.202703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.202741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.202776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.202820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.202862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.203977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204806] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.204982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.205920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 
[2024-05-14 23:51:53.205971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.206020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.206069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.206119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.206170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.206229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.206262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.206308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.206350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.206924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.206984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.207952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.208005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.208063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.208116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.208169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.208221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.208272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.208319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.208364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.208411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.724 [2024-05-14 23:51:53.208452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208920] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.208969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.209959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.210509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.210561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.210616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.210678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 
[2024-05-14 23:51:53.210728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.210775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.210825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.210881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.210929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.210977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.211981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.212987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.213035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.725 [2024-05-14 23:51:53.213084] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entry repeats several hundred times between 2024-05-14 23:51:53.213 and 23:51:53.243 (elapsed 00:09:52.725-00:09:52.731); duplicate log entries omitted ...]
00:09:52.731 [2024-05-14 23:51:53.243028] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.243976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.244036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.244084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.244133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.244180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.731 [2024-05-14 23:51:53.244236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 
[2024-05-14 23:51:53.244287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.244990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.245994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.246952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247239] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.247990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.248036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.248075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.248114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.248158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.248201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.248246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.248294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.248340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 
[2024-05-14 23:51:53.248385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.248426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.248472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:52.732 [2024-05-14 23:51:53.249210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.249959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.250007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.732 [2024-05-14 23:51:53.250058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 
23:51:53.250267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.250980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:52.733 [2024-05-14 23:51:53.251562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.251962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.252978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.253972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254458] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.254999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.733 [2024-05-14 23:51:53.255585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.255633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 
[2024-05-14 23:51:53.255683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.255735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.256957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.257959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258615] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.258957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.259976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 
[2024-05-14 23:51:53.260379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.260975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.261024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.261074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.261124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.261175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.261227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.261278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.261325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.734 [2024-05-14 23:51:53.261372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.735 [2024-05-14 23:51:53.261406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.735 [2024-05-14 23:51:53.261454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.735 [2024-05-14 23:51:53.261503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.735 [2024-05-14 23:51:53.261544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:52.735 [2024-05-14 23:51:53.261583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:52.735 [2024-05-14 23:51:53.261627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeat for every read issued during this pass of the test, console time 00:09:52.735 through 00:09:53.013, timestamps 2024-05-14 23:51:53.261674 through 23:51:53.291262 ...]
00:09:53.013 [2024-05-14 23:51:53.291309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.291954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.292003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.292045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.292087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.292135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.292185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.292240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.292288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.013 [2024-05-14 23:51:53.292338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292484] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.292985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 
[2024-05-14 23:51:53.293781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.293968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.294988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.295985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296597] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.296993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.297580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 
[2024-05-14 23:51:53.298407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.298984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.299972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300800] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.300971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.301020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.301066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.301112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.301666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.301721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.301767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.301809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.301862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.301918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.301968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 [2024-05-14 23:51:53.302530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.014 
[2024-05-14 23:51:53.302579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.302630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.302681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.302730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.302779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.302830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.302879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.302924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.302963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.303960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.304703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:53.015 [2024-05-14 23:51:53.305260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 true 00:09:53.015 [2024-05-14 23:51:53.305427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.305974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306647] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.306957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.307919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 
[2024-05-14 23:51:53.307972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.308017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.308061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.308107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.308150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.308203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.308257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.308882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.308928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.308977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.015 [2024-05-14 23:51:53.309810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:09:53.015 [2024-05-14 23:51:53.309857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:53.015 [... the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error repeats continuously from 23:51:53.309 onward, interleaved with the following test steps ...]
00:09:53.016 23:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723
00:09:53.016 23:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:53.019 [... the repeated nvmf_bdev_ctrlr_read_cmd error continues through 23:51:53.339 ...] [2024-05-14 23:51:53.339305] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.339981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.340030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.340567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.340627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.340674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.340724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.340775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.340826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.340878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.340931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.340979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 
[2024-05-14 23:51:53.341024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.341952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.342979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343399] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.343490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.344973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.345020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 
[2024-05-14 23:51:53.345071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.019 [2024-05-14 23:51:53.345122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.345977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.346988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.347528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.347581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.347632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.347688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.347736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.347787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.347833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.347875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.347917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.347960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348001] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.348948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 
[2024-05-14 23:51:53.349195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.349953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.350989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.020 [2024-05-14 23:51:53.351037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.351951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352104] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.352967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 
[2024-05-14 23:51:53.353396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.353966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.354532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.354578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.354620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.354662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.354697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.354749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.354799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.354849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.354898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.354948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.355979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356332] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.356999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.357041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.357086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.357133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.357174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.357215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.357260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.357305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.357348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.357391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.021 [2024-05-14 23:51:53.357433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 
[2024-05-14 23:51:53.357977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:53.022 [2024-05-14 23:51:53.358531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.358981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.359028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.359069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.359116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 23:51:53.359161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.022 [2024-05-14 
23:51:53.359205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:53.022 [several hundred identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 entries, timestamps 2024-05-14 23:51:53.359250 through 23:51:53.387210, omitted]
00:09:53.025 [2024-05-14 23:51:53.387254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.387991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388498] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.388828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.389985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 
[2024-05-14 23:51:53.390234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.390996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.391984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.392031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.392081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.392132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.392186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.392243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.392297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.392832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.392884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.392934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.392987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.393037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.393092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.393141] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.025 [2024-05-14 23:51:53.393189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.393973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 
[2024-05-14 23:51:53.394341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.394969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.395825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.396988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397171] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.397959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 
[2024-05-14 23:51:53.398384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.398993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.399039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.399086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.399138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.399186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.399242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.399772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.399822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.399873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.399920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.399978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.400993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401299] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.026 [2024-05-14 23:51:53.401997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 
[2024-05-14 23:51:53.402498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.402697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.403987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.404988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.405031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.405073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.405116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.405158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.405202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.405243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.405286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 [2024-05-14 23:51:53.405329] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.027 
[2024-05-14 23:51:53.405375 .. 23:51:53.434084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (the same error line repeats several hundred times over this interval; duplicate lines omitted) 00:09:53.027 - 00:09:53.030 
Message suppressed 999 times: [2024-05-14 23:51:53.413597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.028 
Read completed with error (sct=0, sc=15) 00:09:53.028 
[2024-05-14 23:51:53.434133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.434973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.435953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.436958] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.437959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 
[2024-05-14 23:51:53.438211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.438986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.439804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.440329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.440378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.440429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.440482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.440525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.440576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.440629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.440681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.440738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.030 [2024-05-14 23:51:53.440803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.440852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.440903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.440952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441107] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.441965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 
[2024-05-14 23:51:53.442339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.442976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.443022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.443070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.443122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.443169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.443223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.443288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.443346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.443397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.443914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.443959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.444975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445171] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.445970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 
[2024-05-14 23:51:53.446428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.446839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.447964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.448959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449326] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.031 [2024-05-14 23:51:53.449652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.449694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.449737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.449782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.449814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.449861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.449903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.449943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.449985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.450032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.450077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.450121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.450164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.450213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.450255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.450793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.450839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.450885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.450929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 
[2024-05-14 23:51:53.450973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.451981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.452026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.452075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.032 [2024-05-14 23:51:53.452109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:09:53.032 [2024-05-14 23:51:53.452158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:53.032 [identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd errors, differing only in timestamp, repeated continuously through 2024-05-14 23:51:53.481; duplicate entries omitted]
00:09:53.033 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:53.035 [2024-05-14 23:51:53.481424] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.035 [2024-05-14 23:51:53.481469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.035 [2024-05-14 23:51:53.481519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.035 23:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.307 [2024-05-14 23:51:53.688461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.307 [2024-05-14 23:51:53.688977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.308 [2024-05-14 23:51:53.689024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.308 [2024-05-14 23:51:53.689073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.308 [2024-05-14 23:51:53.689116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.308 [2024-05-14 23:51:53.689160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.308 [2024-05-14 23:51:53.689213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.308 [2024-05-14 23:51:53.689258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.308 [2024-05-14 23:51:53.689311] ctrlr_bdev.c: 
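The *ERROR* entries above and below all report the same sanity check in the NVMe-oF target: nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309) refuses to issue a READ whose transfer length (NLB * block size, here 1 * 512 = 512 bytes) is larger than the buffer described by the command's SGL (here 1 byte), so each such read is completed with an error rather than reaching the bdev, which is what the suppressed "Read completed with error" messages reflect. The sketch below is a minimal, hypothetical illustration of that length check using the values from this log; the struct and function names are invented for illustration and are not the actual SPDK code.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical request shape (not the SPDK structs): num_blocks is the number
     * of logical blocks the READ asks for, block_size the namespace block size,
     * and sgl_length the number of bytes the request's SGL actually describes. */
    struct read_req {
        uint64_t num_blocks;
        uint64_t block_size;
        uint64_t sgl_length;
    };

    /* The check the log entries report on: a read is only issued when
     * num_blocks * block_size fits in the SGL-described buffer. */
    static bool read_fits_sgl(const struct read_req *req)
    {
        return req->num_blocks * req->block_size <= req->sgl_length;
    }

    int main(void)
    {
        /* Values taken from the log: NLB 1, block size 512, SGL length 1. */
        struct read_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };

        if (!read_fits_sgl(&req)) {
            printf("Read NLB %llu * block size %llu > SGL length %llu -> complete with error\n",
                   (unsigned long long)req.num_blocks,
                   (unsigned long long)req.block_size,
                   (unsigned long long)req.sgl_length);
        }
        return 0;
    }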
00:09:53.307 [2024-05-14 23:51:53.688461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[repeated entries: ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1, 2024-05-14 23:51:53.688525 through 23:51:53.705213]
00:09:53.311 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[repeated entries: ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1, 2024-05-14 23:51:53.705257 through 23:51:53.710545]
00:09:53.312 [2024-05-14 23:51:53.710599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.710642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.710685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.710729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.710775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.710814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.710854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.710893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.710934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.710975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.711017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.711061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.711106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.711155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.711202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.312 [2024-05-14 23:51:53.711244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.711827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.711880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.711927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.711971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712321] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.712970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 
[2024-05-14 23:51:53.713608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.713983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.714730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.715956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716349] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.313 [2024-05-14 23:51:53.716656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.716707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.716759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.716812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.716859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.716905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.716952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 
[2024-05-14 23:51:53.717615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.717991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.718982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.719988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.314 [2024-05-14 23:51:53.720579] ctrlr_bdev.c: 
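The repeated message above reports that a read of NLB 1 block at 512 bytes per block exceeds an SGL describing only 1 byte. As an illustration only (hypothetical helper name, not the actual ctrlr_bdev.c source), the check implied by that message amounts to comparing the requested transfer size against the SGL length:

# Hypothetical helper; a minimal sketch of the length check implied by the log
# message above, not the SPDK ctrlr_bdev.c implementation itself.
def read_cmd_fits_sgl(nlb: int, block_size: int, sgl_length: int) -> bool:
    """True when the requested transfer (NLB * block size) fits in the SGL buffer."""
    return nlb * block_size <= sgl_length

# The failing case from the log: 1 block * 512 bytes against a 1-byte SGL.
assert read_cmd_fits_sgl(nlb=1, block_size=512, sgl_length=1) is False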
[... identical nvmf_bdev_ctrlr_read_cmd *ERROR* lines continue around the following trace output; duplicates omitted ...]
00:09:53.314 23:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:09:53.315 23:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
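The two traced lines above show the hotplug-stress script bumping null_size to 1025 and growing the NULL1 null bdev through the SPDK rpc.py helper while I/O is still in flight. A rough Python equivalent of that single step (the rpc.py path, method name, and arguments are copied from the trace; the wrapper function itself is purely illustrative, since the test actually drives this from a shell script) would be:

import subprocess

# Path and arguments taken verbatim from the traced rpc.py call above;
# the wrapper is only an illustrative sketch of that one resize step.
RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def resize_null_bdev(name: str, new_size: int) -> None:
    """Resize an SPDK null bdev via the JSON-RPC helper script (size unit as expected by bdev_null_resize)."""
    subprocess.run([RPC, "bdev_null_resize", name, str(new_size)], check=True)

resize_null_bdev("NULL1", 1025)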
[... the nvmf_bdev_ctrlr_read_cmd *ERROR* lines continue to repeat after the resize; duplicates omitted ...]
[2024-05-14 23:51:53.734133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.734999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.317 [2024-05-14 23:51:53.735046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.735095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.735143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.735199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.735248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.735788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.735838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.735882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.735928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.735970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736903] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.736993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.737972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 
[2024-05-14 23:51:53.738141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.738628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.318 [2024-05-14 23:51:53.739915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.739965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.740980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741063] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.741973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.742015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.742055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.742103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.742628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 
[2024-05-14 23:51:53.742685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.742735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.742788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.742838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.742886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.742938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.742984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.743987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.319 [2024-05-14 23:51:53.744625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.744672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.744720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.744770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.744823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.744873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.744928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.744974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745081] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.745577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 
[2024-05-14 23:51:53.746732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.746986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.747955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.748992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.749579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.749629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.749679] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.749732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.749780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.320 [2024-05-14 23:51:53.749831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.749879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.749941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.749985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 
[2024-05-14 23:51:53.750878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.750970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.751970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.752019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.752067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.321 [2024-05-14 23:51:53.752114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:09:53.321 [2024-05-14 23:51:53.752165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:53.321 [the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* entry recurs several hundred times, timestamps 2024-05-14 23:51:53.752227 through 23:51:53.781862, elapsed 00:09:53.321-00:09:53.327; duplicate entries omitted]
00:09:53.323 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:53.327 [2024-05-14 23:51:53.781905] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.327 [2024-05-14 23:51:53.781955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.327 [2024-05-14 23:51:53.781997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.327 [2024-05-14 23:51:53.782039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.327 [2024-05-14 23:51:53.782079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.327 [2024-05-14 23:51:53.782123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.327 [2024-05-14 23:51:53.782167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.782997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 
[2024-05-14 23:51:53.783148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.783980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.784970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.785976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786080] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.328 [2024-05-14 23:51:53.786989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.787030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.787074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.787119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.787169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 
[2024-05-14 23:51:53.787223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.787270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.787319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.787367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.787421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.787938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.787989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.788998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.789969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790067] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.790868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 
[2024-05-14 23:51:53.791751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.791971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.329 [2024-05-14 23:51:53.792523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.792578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.792625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.792673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.792708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.792755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.792797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.792836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.792878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.792920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.792964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.793992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794120] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.794986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 
[2024-05-14 23:51:53.795882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.795972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.796988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.330 [2024-05-14 23:51:53.797707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.797758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798753] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.798983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 [2024-05-14 23:51:53.799881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 
[2024-05-14 23:51:53.799937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.331 
[... the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error repeated for every queued read command from 23:51:53.799937 through 23:51:53.827908 ...] 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:53.334 
[2024-05-14 23:51:53.827908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 
[2024-05-14 23:51:53.827960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.828992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.829567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.829614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.829660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.829709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.829761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.829817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.829863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.829913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.829962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.336 [2024-05-14 23:51:53.830982] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.831961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 
[2024-05-14 23:51:53.832153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.832568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.833983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.834979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835033] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.835996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 
[2024-05-14 23:51:53.836716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.337 [2024-05-14 23:51:53.836984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.837981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.838984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839111] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.839991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 
[2024-05-14 23:51:53.840825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.840966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.841962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.338 [2024-05-14 23:51:53.842960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843729] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.843997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 
[2024-05-14 23:51:53.844877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.844965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.845985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.846033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.846084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.846126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.846176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.846227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.846274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.846307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.846860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.846908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.846956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339 [2024-05-14 23:51:53.847772] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.339
[2024-05-14 23:51:53.847856 through 23:51:53.867852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated continuously; duplicate log lines omitted) 00:09:53.342
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:53.342
[2024-05-14 23:51:53.867896 through 23:51:53.876500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error repeated continuously; duplicate log lines omitted) 00:09:53.344
[2024-05-14 23:51:53.876549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.876599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.876645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.876689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.876737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.876783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.876829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.876876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.876920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.876960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.877977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.878974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879341] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.879970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.880012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.880054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.880095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.880134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.880178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.880231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.880272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.880310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.880341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.880371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 
[2024-05-14 23:51:53.880961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.881955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.882009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.344 [2024-05-14 23:51:53.882058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 true 00:09:53.345 [2024-05-14 23:51:53.882202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.882970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883365] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.883925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.884466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.884505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.884551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.884597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.884640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.345 [2024-05-14 23:51:53.884681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.884734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.884782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.884827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.884870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.884920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.884975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 
[2024-05-14 23:51:53.885052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.649 [2024-05-14 23:51:53.885822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.885873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.885926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.885978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.886981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.887027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.887087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.887152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.887211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.887260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.887312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.887366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.887420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888071] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.888966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 
[2024-05-14 23:51:53.889334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.889998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.650 [2024-05-14 23:51:53.890638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.890685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.890735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.890785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.890836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.890886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.890934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.890983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.891971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892112] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.892970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 
[2024-05-14 23:51:53.893315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.893986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.894035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.894087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.894138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.894188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.894246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.894301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.894354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.894403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.894963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.895022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.651 [2024-05-14 23:51:53.895068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:09:53.651 [2024-05-14 23:51:53.895120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:53.653 23:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723
00:09:53.653 23:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:53.653 [2024-05-14 23:51:53.905369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
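For context, the two ns_hotplug_stress.sh entries above are the interesting part of this burst: the @44 entry checks with kill -0 that the background I/O generator (PID 3473723) is still alive, and the @45 entry hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1 over the RPC socket, which is the namespace-hotplug event this stress test exercises while reads are in flight. A minimal sketch of that check-then-toggle pattern follows; the re-add call, its -n flag, and the Malloc0 bdev name are assumptions for illustration and are not shown in this log.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    io_pid=3473723                                  # background I/O generator seen earlier in the log

    while kill -0 "$io_pid" 2>/dev/null; do         # keep toggling only while the I/O job is still running
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1    # hot-remove namespace 1, as on script line 45
        "$rpc" nvmf_subsystem_add_ns -n 1 "$nqn" Malloc0   # assumed re-attach step restoring NSID 1
    done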
00:09:53.657 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:53.657 [2024-05-14 23:51:53.923699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL
length 1 00:09:53.657 [2024-05-14 23:51:53.923741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.923784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.923826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.923869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.923911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.923956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.924980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.925026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.925070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.925111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.925158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.925207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.925822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.925869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.925922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.925972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926736] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.926987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.927037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.927086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.927136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.927184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.927237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.657 [2024-05-14 23:51:53.927285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 
[2024-05-14 23:51:53.927945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.927987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.928690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.929982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930742] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.930992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.658 [2024-05-14 23:51:53.931662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.931709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.931760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.931810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.931855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.931903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 
[2024-05-14 23:51:53.931947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.931992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.932973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.933985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934844] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.934970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.935590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 
[2024-05-14 23:51:53.936549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.936952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.937002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.937054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.937102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.937152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.937204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.937255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.937313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.937362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.659 [2024-05-14 23:51:53.937411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.937976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938940] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.938981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.939990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 
[2024-05-14 23:51:53.940602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.940999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.660 [2024-05-14 23:51:53.941864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[identical *ERROR* lines from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd ("Read NLB 1 * block size 512 > SGL length 1") repeated continuously, timestamps 2024-05-14 23:51:53.941 through 23:51:53.971 (log time 00:09:53.660-00:09:53.666); duplicate lines elided] 
00:09:53.666 [2024-05-14 23:51:53.971354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.971994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972617] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.972975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.973630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 
[2024-05-14 23:51:53.974244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.974980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.975012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.975056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.975103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.666 [2024-05-14 23:51:53.975146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.975966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976590] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.976989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.977036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.977576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:53.667 [2024-05-14 23:51:53.977636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.977683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.977733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.977786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.977832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.977882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.977929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.977980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978346] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.978996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 
[2024-05-14 23:51:53.979540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.979983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.980027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.980072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.667 [2024-05-14 23:51:53.980116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.980163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.980210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.980255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.980300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.980346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.980386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.980441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.980489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.980537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.981978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982325] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.982988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 
[2024-05-14 23:51:53.983582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.983941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.984972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.985957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.986011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.986054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.668 [2024-05-14 23:51:53.986096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986516] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.986996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.987039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.987089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.987137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.987181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.987233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.987284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.987338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.987388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.987439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.987486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 
[2024-05-14 23:51:53.988194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.988982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.989034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.989083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.989129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.989175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.989224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.989272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.989323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.669 [2024-05-14 23:51:53.989370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:53.669 [2024-05-14 23:51:53.989418] ... 00:09:53.674 [2024-05-14 23:51:54.017697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (the same *ERROR* line from ctrlr_bdev.c:309 repeats once per read-command iteration of the unit test over this interval; the intervening duplicate entries are elided)
size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.017737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.017787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.018968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019544] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.019981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 
[2024-05-14 23:51:54.020729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.020986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.021037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.021088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.021139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.021193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.021241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.021301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.021360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.021410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.021459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.021970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.674 [2024-05-14 23:51:54.022577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.022620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.022662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.022707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.022751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.022795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.022838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.022885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.022927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.022977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023615] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.023971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.024790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 
[2024-05-14 23:51:54.024831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.025984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.026963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.675 [2024-05-14 23:51:54.027803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.027850] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.027895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.027943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.027991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.028042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.028094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.028139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.028193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.028256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.028307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.028355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.028406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.028453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.028505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 
[2024-05-14 23:51:54.029595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.029986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.030981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.031933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 Message suppressed 999 times: Read completed with error (sct=0, 
sc=15) 00:09:53.676 [2024-05-14 23:51:54.032554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.032598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.032645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.032697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.032745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.032795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.032841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.032891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.032938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.032986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.033987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.676 [2024-05-14 23:51:54.034977] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.035535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 [2024-05-14 23:51:54.036664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.677 
[2024-05-14 23:51:54.036706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(... the same ctrlr_bdev.c:309 read-length error repeats for every queued read from 23:51:54.036706 through 23:51:54.056337; duplicate log lines trimmed ...)
00:09:53.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:53.680 23:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:53.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:53.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:53.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:53.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:53.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:53.960 [2024-05-14 23:51:54.275977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(... the same ctrlr_bdev.c:309 read-length error continues repeating through 23:51:54.284452; duplicate log lines trimmed ...)
[2024-05-14 23:51:54.284500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.284549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.284600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.284649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.284698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.284747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.284801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.284847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.284891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.284937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.284988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.285997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286917] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.286969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.287979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.288032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.288079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 
[2024-05-14 23:51:54.288128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.961 [2024-05-14 23:51:54.288181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.288235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.288286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.288327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.288369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.288414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.288454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.288498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.288530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.289977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.290983] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.291955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.292003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.292833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.292880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.292921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 
[2024-05-14 23:51:54.292959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.293961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.294959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295289] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.962 [2024-05-14 23:51:54.295907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.295951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.295997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.296579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 
[2024-05-14 23:51:54.297079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.297958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.298962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299440] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.299991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 
[2024-05-14 23:51:54.300799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.300986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.301979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.963 [2024-05-14 23:51:54.302027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.964 [2024-05-14 23:51:54.302068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats many times here; duplicate entries omitted ...]
00:09:53.964 23:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:09:53.964 23:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
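The two harness lines above are one iteration of the namespace hot-resize loop: ns_hotplug_stress.sh bumps its null_size counter and then grows the null bdev NULL1 through the SPDK RPC interface while readers keep I/O in flight against the exported namespace, which is what drives the flood of read errors captured in this log. As a rough, hedged sketch (not part of the captured run; the bdev name, size values, and rpc.py path simply mirror what the harness logged, and the new-size argument is assumed to be in MiB), the same step could be driven by hand against a running target:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# grow the null bdev one step per iteration while the namespace stays attached
for null_size in 1025 1026 1027; do
    "$RPC" bdev_null_resize NULL1 "$null_size"
done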
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats many times here; duplicate entries omitted ...]
00:09:53.966 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
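The suppressed completions and the repeated *ERROR* line describe the same failure from the two ends of the connection: on the target side, nvmf_bdev_ctrlr_read_cmd rejects the read because the transfer implied by the command (NLB * block size) exceeds the data described by the request's SGL, and the initiator then sees the read complete with sct=0, sc=15, which most likely maps to the NVMe generic status Data SGL Length Invalid (0x0f). A minimal, hedged illustration of the failed length check (just the arithmetic the error line prints, not the SPDK source):

nlb=1           # logical blocks requested by the Read command, as printed in the log
block_size=512  # namespace block size in bytes
sgl_length=1    # bytes described by the request's data SGL
if [ $(( nlb * block_size )) -gt "$sgl_length" ]; then
    echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}"
fi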
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats many times here; duplicate entries omitted ...]
00:09:53.968 [2024-05-14 23:51:54.331119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.331972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332324] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.332988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 
[2024-05-14 23:51:54.333414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.333831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.334983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.335966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336197] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.336975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.337024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.337072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.337121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.337168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.337225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.337282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.337484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.337856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.337908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 
[2024-05-14 23:51:54.337958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.338011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.338056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.968 [2024-05-14 23:51:54.338104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.338996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.339972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340275] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.340628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.341952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 
[2024-05-14 23:51:54.341992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.342989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.343963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.344008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.344053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.344096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.969 [2024-05-14 23:51:54.344139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.344354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.344759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.344811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.344856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.344906] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.344959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.345957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 
[2024-05-14 23:51:54.346209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.346958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.347625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.348969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.349014] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.970 [2024-05-14 23:51:54.349061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeat continuously from 2024-05-14 23:51:54.349 through 23:51:54.378 (console time 00:09:53.970-00:09:53.974); duplicate entries omitted here. One notice interleaved in the run: "Message suppressed 999 times: Read completed with error (sct=0, sc=15)".]
00:09:53.974 [2024-05-14 23:51:54.377996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 
[2024-05-14 23:51:54.378035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.378968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.379952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.974 [2024-05-14 23:51:54.380716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.380761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.380810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.380851] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.380898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.380939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.380974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.381957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.382004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 
[2024-05-14 23:51:54.382055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.382120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.382166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.382215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.382738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.382791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.382832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.382884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.382927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.382972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.383996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384924] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.384969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.385833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 
[2024-05-14 23:51:54.386625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.386969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.387948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.388002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.388049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.975 [2024-05-14 23:51:54.388093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.388964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389050] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.389970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 
[2024-05-14 23:51:54.390778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.390989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.391962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.392652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393532] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.393967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.976 [2024-05-14 23:51:54.394499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.394550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.394601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.394649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 
[2024-05-14 23:51:54.394697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.394750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.394796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.394844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.394891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.394938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.394984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.977 [2024-05-14 23:51:54.395992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously from 23:51:54.395992 to 23:51:54.425264; identical entries omitted] 
00:09:53.981 [2024-05-14 23:51:54.425264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.425959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426400] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.426830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:53.981 [2024-05-14 23:51:54.427784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.427984] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.428965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 
[2024-05-14 23:51:54.429219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.429980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.430024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.430075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.430117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.430156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.430198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.981 [2024-05-14 23:51:54.430698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.430748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.430791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.430837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.430879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.430925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.430972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.431975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432071] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.432969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 
[2024-05-14 23:51:54.433189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.433579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.434976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.435989] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.436981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.437505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.437552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.982 [2024-05-14 23:51:54.437595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.437627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 
[2024-05-14 23:51:54.437667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.437711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.437750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.437792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.437832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.437877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.437924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.437967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.438991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.439998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.440047] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.440096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.440145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.440189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.440237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.440281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.440330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.440363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.440407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.440973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 
[2024-05-14 23:51:54.441811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.441960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.983 [2024-05-14 23:51:54.442980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error repeats several hundred more times between timestamps 23:51:54.442980 and 23:51:54.472013 (console timestamps 00:09:53.983-00:09:53.987); a lone "true" is printed at 00:09:53.986 ...]
00:09:53.987 [2024-05-14 23:51:54.472013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.987 [2024-05-14 23:51:54.472521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.472981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473114] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.473990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.474036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.474087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.474133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.474184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.474241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.474291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.474335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 
[2024-05-14 23:51:54.474381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.474428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.474475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.474975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.475989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.476966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 23:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:53.988 [2024-05-14 23:51:54.477013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 23:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.988 [2024-05-14 23:51:54.477396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.477871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:09:53.988 [2024-05-14 23:51:54.478835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.478968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.988 [2024-05-14 23:51:54.479797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.479842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.479888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.479934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.479977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.480996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.481043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.481087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.481136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.481186] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.481242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.481289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.481333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.481853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.481898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:53.989 [2024-05-14 23:51:54.481937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.481979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482739] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.482956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 
[2024-05-14 23:51:54.483907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.483955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.484645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.485962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.486008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.486057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.486108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.486152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.486203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.486256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.486300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.486345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.989 [2024-05-14 23:51:54.486395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486876] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.486973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.487997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 
[2024-05-14 23:51:54.488083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.488966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.990 [2024-05-14 23:51:54.489724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... the identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") repeats verbatim for every read in this pass, SPDK timestamps 2024-05-14 23:51:54.489 through 23:51:54.518, console time 00:09:53.990 to 00:09:53.995 ...] 
00:09:53.995 [2024-05-14 23:51:54.518320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.518846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.519970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.520022] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.520074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.520121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.520174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.520224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.520256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.995 [2024-05-14 23:51:54.520303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.520976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 
[2024-05-14 23:51:54.521148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.521964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.522966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.523978] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.524992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.525024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.525070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.525118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.525162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 
[2024-05-14 23:51:54.525211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.525258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.525312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.525355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.525393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.996 [2024-05-14 23:51:54.525440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.525484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.525529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.525564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.525608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.525646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.526979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.527972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528198] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.528985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.529029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.529076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.529134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.529181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.529234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.529780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.529835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 
[2024-05-14 23:51:54.529881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.997 [2024-05-14 23:51:54.529931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.529978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:53.998 [2024-05-14 23:51:54.530645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.530687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.530738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.530775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.530818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.530855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.530901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.530948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.530995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.531953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532244] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.532643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:54.257 [2024-05-14 23:51:54.533156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:55.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.194 23:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.194 23:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:55.194 23:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:55.452 true 00:09:55.452 23:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:55.452 23:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.390 23:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.390 23:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:56.390 23:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:56.648 true 00:09:56.648 23:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:56.648 23:51:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.907 23:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.166 23:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:57.166 23:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:57.166 true 00:09:57.166 23:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:57.166 23:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.424 23:51:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.683 23:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:57.683 23:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:57.683 true 00:09:57.683 23:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:57.683 23:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.942 23:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.201 23:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:58.201 23:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:58.201 true 00:09:58.201 23:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:58.201 23:51:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.578 23:51:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.578 23:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1032 00:09:59.578 23:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:59.837 true 00:09:59.837 23:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:09:59.837 23:52:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.774 23:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.774 23:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:00.774 23:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:01.034 true 00:10:01.034 23:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:10:01.034 23:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.292 23:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.292 23:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:01.292 23:52:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:01.551 true 00:10:01.551 23:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:10:01.551 23:52:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.934 23:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.934 23:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:02.934 23:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:03.192 true 00:10:03.192 23:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:10:03.192 23:52:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.127 23:52:04 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.127 23:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:04.127 23:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:04.127 true 00:10:04.385 23:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:10:04.385 23:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.385 23:52:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.644 23:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:04.644 23:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:04.903 true 00:10:04.903 23:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:10:04.903 23:52:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.840 Initializing NVMe Controllers 00:10:05.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:05.840 Controller IO queue size 128, less than required. 00:10:05.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:05.840 Controller IO queue size 128, less than required. 00:10:05.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:05.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:05.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:05.840 Initialization complete. Launching workers. 
00:10:05.840 ========================================================
00:10:05.840 Latency(us)
00:10:05.840 Device Information : IOPS MiB/s Average min max
00:10:05.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2979.63 1.45 24465.58 1751.02 1028035.67
00:10:05.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13972.83 6.82 9160.66 1881.22 357566.38
00:10:05.840 ========================================================
00:10:05.840 Total : 16952.47 8.28 11850.71 1751.02 1028035.67
00:10:05.840
00:10:05.840 23:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.100 23:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:06.100 23:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:06.360 true 00:10:06.360 23:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3473723 00:10:06.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3473723) - No such process 00:10:06.360 23:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3473723 00:10:06.360 23:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.360 23:52:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.622 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:06.622 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:06.622 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:06.622 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.622 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:06.889 null0 00:10:06.889 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.889 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.889 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:06.889 null1 00:10:06.889 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.889 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.889 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:07.148 null2 00:10:07.148 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:07.148 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i <
nthreads )) 00:10:07.148 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:07.407 null3 00:10:07.407 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:07.407 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:07.407 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:07.407 null4 00:10:07.407 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:07.407 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:07.407 23:52:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:07.667 null5 00:10:07.667 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:07.667 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:07.667 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:07.926 null6 00:10:07.926 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:07.926 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:07.926 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:07.926 null7 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
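To make the trace above easier to follow: the sh@58 to sh@60 lines are the per-worker null-bdev setup. A minimal bash sketch of that setup, reconstructed from the trace rather than copied from the script (the reading of the 100 and 4096 arguments as size in MB and block size in bytes follows the usual bdev_null_create convention and is an assumption here):

    nthreads=8
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < nthreads; i++)); do
        # one null bdev per hotplug worker: null0..null7, 100 MB each, 4096-byte blocks
        "$rpc" bdev_null_create "null$i" 100 4096
    done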
00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
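The add_remove workers being launched here can be reconstructed from the sh@14 to sh@18 trace lines as roughly the helper below. This is a sketch inferred from the trace, not the literal script text; $rpc stands for the rpc.py path shown in the trace:

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace on cnode1, ten times
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }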
00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
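The interleaved sh@62 to sh@66 lines are eight of these workers being started in the background and then collected. Reconstructed as a sketch (the nsid-to-bdev pairing is taken from the add_remove arguments in the trace):

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 backed by null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                      # sh@66: the eight worker PIDs listed in the trace

The PIDs reported at sh@66 (3479656 3479657 and so on) are these background workers, which is why the add_ns and remove_ns calls from different namespaces interleave freely in the lines that follow.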
00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3479656 3479657 3479659 3479661 3479663 3479665 3479667 3479668 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.187 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.188 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.188 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.447 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.448 23:52:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.448 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.707 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.708 
23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.708 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.968 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.227 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.486 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.486 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.486 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.486 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.486 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.486 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.486 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.486 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.486 23:52:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.486 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.486 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.486 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.487 23:52:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.487 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.746 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.746 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.746 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.746 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.746 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.746 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.746 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.746 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.746 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.746 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.006 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.266 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.266 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.266 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.266 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:10.266 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.266 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.266 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.266 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.267 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.267 
23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.528 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.528 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.528 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.528 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.528 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.528 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.528 23:52:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.528 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.528 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.528 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:10.528 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.528 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.528 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.528 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.528 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.528 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.788 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.788 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.789 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:11.048 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.049 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.049 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:11.049 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.049 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.049 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:11.049 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.308 
23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:11.308 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.309 23:52:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:11.568 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:11.568 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:11.568 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.568 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.568 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.568 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.568 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:11.568 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:11.568 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
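For completeness, the single-namespace phase traced at the top of this section (sh@44 to sh@50) keeps swapping namespace 1 against the Delay0 bdev while an I/O generator (pid 3473723 in this run) is still alive, bumping the NULL1 size argument by one per pass; once kill -0 reports "No such process" the script waits for the generator and moves on to the threaded phase shown here. A reconstructed sketch, with the pid variable name and loop form as assumptions and $rpc as in the earlier sketches:

    while kill -0 "$perf_pid"; do                                        # sh@44
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46
        null_size=$((null_size + 1))                                     # sh@49: 1036, 1037, 1038, ...
        "$rpc" bdev_null_resize NULL1 "$null_size"                       # sh@50
    done
    wait "$perf_pid"                                                     # sh@53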
00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.828 rmmod nvme_tcp 00:10:11.828 rmmod nvme_fabrics 00:10:11.828 rmmod nvme_keyring 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3473397 ']' 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3473397 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3473397 ']' 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3473397 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3473397 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3473397' 00:10:11.828 killing process with pid 3473397 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3473397 00:10:11.828 [2024-05-14 23:52:12.372939] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:11.828 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3473397 00:10:12.086 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.086 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.086 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.086 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.086 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.086 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.086 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.086 23:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.624 23:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:14.624 00:10:14.624 real 0m48.522s 00:10:14.624 user 3m9.374s 00:10:14.624 sys 0m21.329s 00:10:14.624 23:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:14.624 23:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.624 ************************************ 00:10:14.624 END TEST nvmf_ns_hotplug_stress 00:10:14.624 ************************************ 00:10:14.624 23:52:14 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:14.624 23:52:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:14.624 23:52:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:14.624 23:52:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:14.624 ************************************ 00:10:14.624 START TEST nvmf_connect_stress 00:10:14.624 ************************************ 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:14.624 * Looking for test storage... 
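The nvmftestfini sequence traced just above tears the target down in a fixed order: flush outstanding I/O, unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, then drop the test network namespace and flush the initiator-side address. A rough standalone equivalent, assuming the pid and the cvl_0_* names from this run (the real helper lives in test/nvmf/common.sh and retries the module unloads, which this sketch does not):

    sync
    modprobe -v -r nvme-tcp || true       # also drags out nvme_fabrics and nvme_keyring, as in the rmmod lines above
    modprobe -v -r nvme-fabrics || true
    kill "$nvmfpid" 2>/dev/null           # nvmf_tgt pid, e.g. 3473397 in this run
    wait "$nvmfpid" 2>/dev/null || true   # only valid if the target was started by this shell, as in the test framework
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1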
00:10:14.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:14.624 23:52:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.625 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:14.625 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:14.625 23:52:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.625 23:52:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:21.200 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:21.200 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:21.200 Found net devices under 0000:af:00.0: cvl_0_0 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.200 23:52:21 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:21.200 Found net devices under 0000:af:00.1: cvl_0_1 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.200 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:10:21.201 00:10:21.201 --- 10.0.0.2 ping statistics --- 00:10:21.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.201 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:10:21.201 00:10:21.201 --- 10.0.0.1 ping statistics --- 00:10:21.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.201 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3484079 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3484079 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3484079 ']' 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:21.201 23:52:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.461 [2024-05-14 23:52:21.813909] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
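The nvmf_tcp_init trace above builds the test topology: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and addressed as the target side (10.0.0.2), while its peer port cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1); after a ping in each direction confirms connectivity, nvmf_tgt is launched inside that namespace. Condensed into a plain command sequence, with the interface names, addresses, and binary path taken from this run:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port on the initiator-side interface
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Splitting the two ports across namespaces is what forces real NVMe/TCP traffic onto the link between them; with both addresses in one namespace the kernel would short-circuit the connection through the local stack.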
00:10:21.461 [2024-05-14 23:52:21.813961] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.461 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.461 [2024-05-14 23:52:21.887739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.461 [2024-05-14 23:52:21.960905] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.461 [2024-05-14 23:52:21.960939] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.461 [2024-05-14 23:52:21.960949] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.461 [2024-05-14 23:52:21.960957] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.461 [2024-05-14 23:52:21.960964] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.461 [2024-05-14 23:52:21.961065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.461 [2024-05-14 23:52:21.961165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.461 [2024-05-14 23:52:21.961168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.400 [2024-05-14 23:52:22.673480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.400 [2024-05-14 23:52:22.697916] nvmf_rpc.c: 610:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:22.400 [2024-05-14 23:52:22.698122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.400 NULL1 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3484354 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.400 23:52:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.660 23:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.660 23:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:22.660 23:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.660 23:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.660 23:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.919 23:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.919 23:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:22.919 23:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.919 23:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.919 23:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.488 23:52:23 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.488 23:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:23.488 23:52:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.489 23:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.489 23:52:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.748 23:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.748 23:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:23.748 23:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.748 23:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.748 23:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.007 23:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.007 23:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:24.007 23:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.007 23:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.007 23:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.268 23:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.268 23:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:24.268 23:52:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.268 23:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.268 23:52:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.564 23:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.564 23:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:24.564 23:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.564 23:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.564 23:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.823 23:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.823 23:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:24.823 23:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.823 23:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.823 23:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.391 23:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.391 23:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:25.391 23:52:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.391 23:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.391 23:52:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.649 23:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:10:25.649 23:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:25.649 23:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.649 23:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.649 23:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.908 23:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.908 23:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:25.908 23:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.908 23:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.908 23:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.166 23:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.166 23:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:26.166 23:52:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.166 23:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.166 23:52:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.732 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.732 23:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:26.732 23:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.732 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.732 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.993 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.993 23:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:26.993 23:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.993 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.993 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.252 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.252 23:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:27.252 23:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.252 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.252 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.511 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.512 23:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:27.512 23:52:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.512 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.512 23:52:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.771 23:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.771 23:52:28 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:27.771 23:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.771 23:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.771 23:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.339 23:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.339 23:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:28.339 23:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.339 23:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.339 23:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.598 23:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.598 23:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:28.598 23:52:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.598 23:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.598 23:52:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.857 23:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.857 23:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:28.857 23:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.857 23:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.857 23:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.116 23:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.116 23:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:29.116 23:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.116 23:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.116 23:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.375 23:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.375 23:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:29.375 23:52:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.375 23:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.375 23:52:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 23:52:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.942 23:52:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:29.942 23:52:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.942 23:52:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.942 23:52:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.201 23:52:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.201 23:52:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3484354 00:10:30.201 23:52:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.201 23:52:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.201 23:52:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.460 23:52:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.460 23:52:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:30.460 23:52:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.460 23:52:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.460 23:52:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.719 23:52:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.719 23:52:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:30.719 23:52:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.719 23:52:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.719 23:52:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.978 23:52:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.978 23:52:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:30.978 23:52:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.978 23:52:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.978 23:52:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.546 23:52:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.546 23:52:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:31.546 23:52:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.546 23:52:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.546 23:52:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.806 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.806 23:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:31.806 23:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.806 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.806 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.065 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.065 23:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:32.065 23:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.065 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.065 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.324 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.324 23:52:32 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3484354 00:10:32.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3484354) - No such process 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3484354 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.324 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.324 rmmod nvme_tcp 00:10:32.324 rmmod nvme_fabrics 00:10:32.325 rmmod nvme_keyring 00:10:32.325 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3484079 ']' 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3484079 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3484079 ']' 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3484079 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3484079 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3484079' 00:10:32.584 killing process with pid 3484079 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3484079 00:10:32.584 [2024-05-14 23:52:32.979020] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:32.584 23:52:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3484079 00:10:32.843 23:52:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.843 23:52:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.843 23:52:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.843 23:52:33 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.843 23:52:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.843 23:52:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.843 23:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.843 23:52:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.750 23:52:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:34.750 00:10:34.750 real 0m20.521s 00:10:34.750 user 0m40.367s 00:10:34.750 sys 0m10.245s 00:10:34.750 23:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:34.750 23:52:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.750 ************************************ 00:10:34.750 END TEST nvmf_connect_stress 00:10:34.750 ************************************ 00:10:34.750 23:52:35 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:34.750 23:52:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:34.750 23:52:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:34.750 23:52:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.010 ************************************ 00:10:35.010 START TEST nvmf_fused_ordering 00:10:35.010 ************************************ 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:35.010 * Looking for test storage... 
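For reference, the nvmf_connect_stress run that just finished reduces to: create the TCP transport and a namespace-capped subsystem, expose it on 10.0.0.2:4420 backed by a null bdev, start the connect_stress tool against it for 10 seconds, and keep replaying a pre-generated batch of RPCs while the tool is alive. A hedged outline of that flow, using rpc_cmd as shorthand for the framework's rpc.py wrapper and leaving the rpc.txt contents abstract, since the heredocs that fill it are not visible in this trace:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    cs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress
    "$cs" -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < rpc.txt     # replay the 20 pre-generated subsystem RPCs; exact invocation inferred from the kill -0 polling above
    done
    wait "$PERF_PID"
    rm -f rpc.txt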
00:10:35.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.010 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.011 23:52:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:41.584 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:41.584 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:41.584 Found net devices under 0000:af:00.0: cvl_0_0 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.584 23:52:41 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:41.584 Found net devices under 0000:af:00.1: cvl_0_1 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.584 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:41.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:41.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:10:41.585 00:10:41.585 --- 10.0.0.2 ping statistics --- 00:10:41.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.585 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:10:41.585 00:10:41.585 --- 10.0.0.1 ping statistics --- 00:10:41.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.585 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3489664 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3489664 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3489664 ']' 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:41.585 23:52:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.585 [2024-05-14 23:52:41.816664] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
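What the nvmftestinit/nvmf_tcp_init trace above amounts to: the two E810 ports (cvl_0_0, cvl_0_1) are split between a fresh network namespace for the target side (10.0.0.2) and the root namespace for the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and reachability is checked with one ping in each direction. A minimal sketch of the same plumbing, using the interface names and addresses from this log, is:

#!/usr/bin/env bash
# Re-create the NVMe/TCP test bed traced above (interface names are specific to this host).
set -ex
sudo ip netns add cvl_0_0_ns_spdk                          # target-side namespace
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port into it
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator keeps the other port
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port
ping -c 1 10.0.0.2                                         # initiator -> target
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
sudo modprobe nvme-tcp                                     # kernel initiator support

The target application itself is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which produces the "Starting SPDK v24.05-pre ..." banner that follows.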
00:10:41.585 [2024-05-14 23:52:41.816711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.585 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.585 [2024-05-14 23:52:41.890570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.585 [2024-05-14 23:52:41.962588] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.585 [2024-05-14 23:52:41.962622] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.585 [2024-05-14 23:52:41.962632] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.585 [2024-05-14 23:52:41.962640] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.585 [2024-05-14 23:52:41.962648] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.585 [2024-05-14 23:52:41.962672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:42.187 [2024-05-14 23:52:42.681300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:42.187 [2024-05-14 23:52:42.697291] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:42.187 [2024-05-14 23:52:42.697478] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:42.187 NULL1 00:10:42.187 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.188 23:52:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:42.188 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.188 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:42.188 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.188 23:52:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:42.188 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.188 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:42.188 23:52:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.188 23:52:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:42.188 [2024-05-14 23:52:42.753730] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
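With the target listening, the subsystem above is assembled through a short RPC sequence: create the TCP transport, create cnode1, add the 10.0.0.2:4420 listener, and back it with a null bdev. The same configuration issued directly through SPDK's scripts/rpc.py, rather than the harness's rpc_cmd wrapper, would look roughly like the sketch below (the RPC unix socket /var/tmp/spdk.sock is path-based, so it should be reachable without entering the namespace):

#!/usr/bin/env bash
# Sketch of the target-side RPCs traced above, run against the default /var/tmp/spdk.sock.
RPC="sudo ./scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering example is then pointed at that subsystem with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'; each fused_ordering(N) line that follows appears to be one iteration of the tool's fused-command loop against the NULL1 namespace.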
00:10:42.188 [2024-05-14 23:52:42.753767] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489945 ] 00:10:42.447 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.015 Attached to nqn.2016-06.io.spdk:cnode1 00:10:43.015 Namespace ID: 1 size: 1GB 00:10:43.015 fused_ordering(0) 00:10:43.015 fused_ordering(1) 00:10:43.015 fused_ordering(2) 00:10:43.015 fused_ordering(3) 00:10:43.015 fused_ordering(4) 00:10:43.015 fused_ordering(5) 00:10:43.015 fused_ordering(6) 00:10:43.015 fused_ordering(7) 00:10:43.015 fused_ordering(8) 00:10:43.015 fused_ordering(9) 00:10:43.015 fused_ordering(10) 00:10:43.015 fused_ordering(11) 00:10:43.015 fused_ordering(12) 00:10:43.015 fused_ordering(13) 00:10:43.015 fused_ordering(14) 00:10:43.015 fused_ordering(15) 00:10:43.015 fused_ordering(16) 00:10:43.015 fused_ordering(17) 00:10:43.015 fused_ordering(18) 00:10:43.015 fused_ordering(19) 00:10:43.015 fused_ordering(20) 00:10:43.015 fused_ordering(21) 00:10:43.015 fused_ordering(22) 00:10:43.015 fused_ordering(23) 00:10:43.015 fused_ordering(24) 00:10:43.015 fused_ordering(25) 00:10:43.015 fused_ordering(26) 00:10:43.015 fused_ordering(27) 00:10:43.015 fused_ordering(28) 00:10:43.015 fused_ordering(29) 00:10:43.015 fused_ordering(30) 00:10:43.015 fused_ordering(31) 00:10:43.015 fused_ordering(32) 00:10:43.015 fused_ordering(33) 00:10:43.015 fused_ordering(34) 00:10:43.016 fused_ordering(35) 00:10:43.016 fused_ordering(36) 00:10:43.016 fused_ordering(37) 00:10:43.016 fused_ordering(38) 00:10:43.016 fused_ordering(39) 00:10:43.016 fused_ordering(40) 00:10:43.016 fused_ordering(41) 00:10:43.016 fused_ordering(42) 00:10:43.016 fused_ordering(43) 00:10:43.016 fused_ordering(44) 00:10:43.016 fused_ordering(45) 00:10:43.016 fused_ordering(46) 00:10:43.016 fused_ordering(47) 00:10:43.016 fused_ordering(48) 00:10:43.016 fused_ordering(49) 00:10:43.016 fused_ordering(50) 00:10:43.016 fused_ordering(51) 00:10:43.016 fused_ordering(52) 00:10:43.016 fused_ordering(53) 00:10:43.016 fused_ordering(54) 00:10:43.016 fused_ordering(55) 00:10:43.016 fused_ordering(56) 00:10:43.016 fused_ordering(57) 00:10:43.016 fused_ordering(58) 00:10:43.016 fused_ordering(59) 00:10:43.016 fused_ordering(60) 00:10:43.016 fused_ordering(61) 00:10:43.016 fused_ordering(62) 00:10:43.016 fused_ordering(63) 00:10:43.016 fused_ordering(64) 00:10:43.016 fused_ordering(65) 00:10:43.016 fused_ordering(66) 00:10:43.016 fused_ordering(67) 00:10:43.016 fused_ordering(68) 00:10:43.016 fused_ordering(69) 00:10:43.016 fused_ordering(70) 00:10:43.016 fused_ordering(71) 00:10:43.016 fused_ordering(72) 00:10:43.016 fused_ordering(73) 00:10:43.016 fused_ordering(74) 00:10:43.016 fused_ordering(75) 00:10:43.016 fused_ordering(76) 00:10:43.016 fused_ordering(77) 00:10:43.016 fused_ordering(78) 00:10:43.016 fused_ordering(79) 00:10:43.016 fused_ordering(80) 00:10:43.016 fused_ordering(81) 00:10:43.016 fused_ordering(82) 00:10:43.016 fused_ordering(83) 00:10:43.016 fused_ordering(84) 00:10:43.016 fused_ordering(85) 00:10:43.016 fused_ordering(86) 00:10:43.016 fused_ordering(87) 00:10:43.016 fused_ordering(88) 00:10:43.016 fused_ordering(89) 00:10:43.016 fused_ordering(90) 00:10:43.016 fused_ordering(91) 00:10:43.016 fused_ordering(92) 00:10:43.016 fused_ordering(93) 00:10:43.016 fused_ordering(94) 00:10:43.016 fused_ordering(95) 00:10:43.016 fused_ordering(96) 00:10:43.016 
fused_ordering(97) 00:10:43.016 fused_ordering(98) 00:10:43.016 fused_ordering(99) 00:10:43.016 fused_ordering(100) 00:10:43.016 fused_ordering(101) 00:10:43.016 fused_ordering(102) 00:10:43.016 fused_ordering(103) 00:10:43.016 fused_ordering(104) 00:10:43.016 fused_ordering(105) 00:10:43.016 fused_ordering(106) 00:10:43.016 fused_ordering(107) 00:10:43.016 fused_ordering(108) 00:10:43.016 fused_ordering(109) 00:10:43.016 fused_ordering(110) 00:10:43.016 fused_ordering(111) 00:10:43.016 fused_ordering(112) 00:10:43.016 fused_ordering(113) 00:10:43.016 fused_ordering(114) 00:10:43.016 fused_ordering(115) 00:10:43.016 fused_ordering(116) 00:10:43.016 fused_ordering(117) 00:10:43.016 fused_ordering(118) 00:10:43.016 fused_ordering(119) 00:10:43.016 fused_ordering(120) 00:10:43.016 fused_ordering(121) 00:10:43.016 fused_ordering(122) 00:10:43.016 fused_ordering(123) 00:10:43.016 fused_ordering(124) 00:10:43.016 fused_ordering(125) 00:10:43.016 fused_ordering(126) 00:10:43.016 fused_ordering(127) 00:10:43.016 fused_ordering(128) 00:10:43.016 fused_ordering(129) 00:10:43.016 fused_ordering(130) 00:10:43.016 fused_ordering(131) 00:10:43.016 fused_ordering(132) 00:10:43.016 fused_ordering(133) 00:10:43.016 fused_ordering(134) 00:10:43.016 fused_ordering(135) 00:10:43.016 fused_ordering(136) 00:10:43.016 fused_ordering(137) 00:10:43.016 fused_ordering(138) 00:10:43.016 fused_ordering(139) 00:10:43.016 fused_ordering(140) 00:10:43.016 fused_ordering(141) 00:10:43.016 fused_ordering(142) 00:10:43.016 fused_ordering(143) 00:10:43.016 fused_ordering(144) 00:10:43.016 fused_ordering(145) 00:10:43.016 fused_ordering(146) 00:10:43.016 fused_ordering(147) 00:10:43.016 fused_ordering(148) 00:10:43.016 fused_ordering(149) 00:10:43.016 fused_ordering(150) 00:10:43.016 fused_ordering(151) 00:10:43.016 fused_ordering(152) 00:10:43.016 fused_ordering(153) 00:10:43.016 fused_ordering(154) 00:10:43.016 fused_ordering(155) 00:10:43.016 fused_ordering(156) 00:10:43.016 fused_ordering(157) 00:10:43.016 fused_ordering(158) 00:10:43.016 fused_ordering(159) 00:10:43.016 fused_ordering(160) 00:10:43.016 fused_ordering(161) 00:10:43.016 fused_ordering(162) 00:10:43.016 fused_ordering(163) 00:10:43.016 fused_ordering(164) 00:10:43.016 fused_ordering(165) 00:10:43.016 fused_ordering(166) 00:10:43.016 fused_ordering(167) 00:10:43.016 fused_ordering(168) 00:10:43.016 fused_ordering(169) 00:10:43.016 fused_ordering(170) 00:10:43.016 fused_ordering(171) 00:10:43.016 fused_ordering(172) 00:10:43.016 fused_ordering(173) 00:10:43.016 fused_ordering(174) 00:10:43.016 fused_ordering(175) 00:10:43.016 fused_ordering(176) 00:10:43.016 fused_ordering(177) 00:10:43.016 fused_ordering(178) 00:10:43.016 fused_ordering(179) 00:10:43.016 fused_ordering(180) 00:10:43.016 fused_ordering(181) 00:10:43.016 fused_ordering(182) 00:10:43.016 fused_ordering(183) 00:10:43.016 fused_ordering(184) 00:10:43.016 fused_ordering(185) 00:10:43.016 fused_ordering(186) 00:10:43.016 fused_ordering(187) 00:10:43.016 fused_ordering(188) 00:10:43.016 fused_ordering(189) 00:10:43.016 fused_ordering(190) 00:10:43.016 fused_ordering(191) 00:10:43.016 fused_ordering(192) 00:10:43.016 fused_ordering(193) 00:10:43.016 fused_ordering(194) 00:10:43.016 fused_ordering(195) 00:10:43.016 fused_ordering(196) 00:10:43.016 fused_ordering(197) 00:10:43.016 fused_ordering(198) 00:10:43.016 fused_ordering(199) 00:10:43.016 fused_ordering(200) 00:10:43.016 fused_ordering(201) 00:10:43.016 fused_ordering(202) 00:10:43.016 fused_ordering(203) 00:10:43.016 fused_ordering(204) 
00:10:43.016 fused_ordering(205) 00:10:43.584 fused_ordering(206) 00:10:43.584 fused_ordering(207) 00:10:43.584 fused_ordering(208) 00:10:43.584 fused_ordering(209) 00:10:43.584 fused_ordering(210) 00:10:43.584 fused_ordering(211) 00:10:43.584 fused_ordering(212) 00:10:43.584 fused_ordering(213) 00:10:43.584 fused_ordering(214) 00:10:43.584 fused_ordering(215) 00:10:43.584 fused_ordering(216) 00:10:43.584 fused_ordering(217) 00:10:43.584 fused_ordering(218) 00:10:43.584 fused_ordering(219) 00:10:43.584 fused_ordering(220) 00:10:43.584 fused_ordering(221) 00:10:43.584 fused_ordering(222) 00:10:43.584 fused_ordering(223) 00:10:43.584 fused_ordering(224) 00:10:43.584 fused_ordering(225) 00:10:43.584 fused_ordering(226) 00:10:43.584 fused_ordering(227) 00:10:43.584 fused_ordering(228) 00:10:43.584 fused_ordering(229) 00:10:43.584 fused_ordering(230) 00:10:43.584 fused_ordering(231) 00:10:43.584 fused_ordering(232) 00:10:43.584 fused_ordering(233) 00:10:43.584 fused_ordering(234) 00:10:43.584 fused_ordering(235) 00:10:43.584 fused_ordering(236) 00:10:43.584 fused_ordering(237) 00:10:43.584 fused_ordering(238) 00:10:43.584 fused_ordering(239) 00:10:43.584 fused_ordering(240) 00:10:43.584 fused_ordering(241) 00:10:43.584 fused_ordering(242) 00:10:43.584 fused_ordering(243) 00:10:43.584 fused_ordering(244) 00:10:43.584 fused_ordering(245) 00:10:43.584 fused_ordering(246) 00:10:43.584 fused_ordering(247) 00:10:43.584 fused_ordering(248) 00:10:43.584 fused_ordering(249) 00:10:43.584 fused_ordering(250) 00:10:43.584 fused_ordering(251) 00:10:43.584 fused_ordering(252) 00:10:43.584 fused_ordering(253) 00:10:43.584 fused_ordering(254) 00:10:43.584 fused_ordering(255) 00:10:43.584 fused_ordering(256) 00:10:43.584 fused_ordering(257) 00:10:43.584 fused_ordering(258) 00:10:43.584 fused_ordering(259) 00:10:43.584 fused_ordering(260) 00:10:43.584 fused_ordering(261) 00:10:43.584 fused_ordering(262) 00:10:43.584 fused_ordering(263) 00:10:43.584 fused_ordering(264) 00:10:43.584 fused_ordering(265) 00:10:43.584 fused_ordering(266) 00:10:43.584 fused_ordering(267) 00:10:43.584 fused_ordering(268) 00:10:43.584 fused_ordering(269) 00:10:43.584 fused_ordering(270) 00:10:43.584 fused_ordering(271) 00:10:43.584 fused_ordering(272) 00:10:43.584 fused_ordering(273) 00:10:43.584 fused_ordering(274) 00:10:43.584 fused_ordering(275) 00:10:43.584 fused_ordering(276) 00:10:43.584 fused_ordering(277) 00:10:43.584 fused_ordering(278) 00:10:43.584 fused_ordering(279) 00:10:43.584 fused_ordering(280) 00:10:43.584 fused_ordering(281) 00:10:43.584 fused_ordering(282) 00:10:43.584 fused_ordering(283) 00:10:43.584 fused_ordering(284) 00:10:43.584 fused_ordering(285) 00:10:43.584 fused_ordering(286) 00:10:43.584 fused_ordering(287) 00:10:43.584 fused_ordering(288) 00:10:43.584 fused_ordering(289) 00:10:43.584 fused_ordering(290) 00:10:43.584 fused_ordering(291) 00:10:43.584 fused_ordering(292) 00:10:43.584 fused_ordering(293) 00:10:43.584 fused_ordering(294) 00:10:43.584 fused_ordering(295) 00:10:43.584 fused_ordering(296) 00:10:43.584 fused_ordering(297) 00:10:43.584 fused_ordering(298) 00:10:43.584 fused_ordering(299) 00:10:43.584 fused_ordering(300) 00:10:43.584 fused_ordering(301) 00:10:43.584 fused_ordering(302) 00:10:43.584 fused_ordering(303) 00:10:43.584 fused_ordering(304) 00:10:43.584 fused_ordering(305) 00:10:43.584 fused_ordering(306) 00:10:43.584 fused_ordering(307) 00:10:43.584 fused_ordering(308) 00:10:43.584 fused_ordering(309) 00:10:43.584 fused_ordering(310) 00:10:43.584 fused_ordering(311) 00:10:43.584 
fused_ordering(312) 00:10:43.584 fused_ordering(313) 00:10:43.584 fused_ordering(314) 00:10:43.584 fused_ordering(315) 00:10:43.584 fused_ordering(316) 00:10:43.584 fused_ordering(317) 00:10:43.584 fused_ordering(318) 00:10:43.584 fused_ordering(319) 00:10:43.584 fused_ordering(320) 00:10:43.584 fused_ordering(321) 00:10:43.584 fused_ordering(322) 00:10:43.584 fused_ordering(323) 00:10:43.584 fused_ordering(324) 00:10:43.584 fused_ordering(325) 00:10:43.584 fused_ordering(326) 00:10:43.584 fused_ordering(327) 00:10:43.584 fused_ordering(328) 00:10:43.584 fused_ordering(329) 00:10:43.584 fused_ordering(330) 00:10:43.584 fused_ordering(331) 00:10:43.584 fused_ordering(332) 00:10:43.584 fused_ordering(333) 00:10:43.584 fused_ordering(334) 00:10:43.584 fused_ordering(335) 00:10:43.584 fused_ordering(336) 00:10:43.584 fused_ordering(337) 00:10:43.584 fused_ordering(338) 00:10:43.585 fused_ordering(339) 00:10:43.585 fused_ordering(340) 00:10:43.585 fused_ordering(341) 00:10:43.585 fused_ordering(342) 00:10:43.585 fused_ordering(343) 00:10:43.585 fused_ordering(344) 00:10:43.585 fused_ordering(345) 00:10:43.585 fused_ordering(346) 00:10:43.585 fused_ordering(347) 00:10:43.585 fused_ordering(348) 00:10:43.585 fused_ordering(349) 00:10:43.585 fused_ordering(350) 00:10:43.585 fused_ordering(351) 00:10:43.585 fused_ordering(352) 00:10:43.585 fused_ordering(353) 00:10:43.585 fused_ordering(354) 00:10:43.585 fused_ordering(355) 00:10:43.585 fused_ordering(356) 00:10:43.585 fused_ordering(357) 00:10:43.585 fused_ordering(358) 00:10:43.585 fused_ordering(359) 00:10:43.585 fused_ordering(360) 00:10:43.585 fused_ordering(361) 00:10:43.585 fused_ordering(362) 00:10:43.585 fused_ordering(363) 00:10:43.585 fused_ordering(364) 00:10:43.585 fused_ordering(365) 00:10:43.585 fused_ordering(366) 00:10:43.585 fused_ordering(367) 00:10:43.585 fused_ordering(368) 00:10:43.585 fused_ordering(369) 00:10:43.585 fused_ordering(370) 00:10:43.585 fused_ordering(371) 00:10:43.585 fused_ordering(372) 00:10:43.585 fused_ordering(373) 00:10:43.585 fused_ordering(374) 00:10:43.585 fused_ordering(375) 00:10:43.585 fused_ordering(376) 00:10:43.585 fused_ordering(377) 00:10:43.585 fused_ordering(378) 00:10:43.585 fused_ordering(379) 00:10:43.585 fused_ordering(380) 00:10:43.585 fused_ordering(381) 00:10:43.585 fused_ordering(382) 00:10:43.585 fused_ordering(383) 00:10:43.585 fused_ordering(384) 00:10:43.585 fused_ordering(385) 00:10:43.585 fused_ordering(386) 00:10:43.585 fused_ordering(387) 00:10:43.585 fused_ordering(388) 00:10:43.585 fused_ordering(389) 00:10:43.585 fused_ordering(390) 00:10:43.585 fused_ordering(391) 00:10:43.585 fused_ordering(392) 00:10:43.585 fused_ordering(393) 00:10:43.585 fused_ordering(394) 00:10:43.585 fused_ordering(395) 00:10:43.585 fused_ordering(396) 00:10:43.585 fused_ordering(397) 00:10:43.585 fused_ordering(398) 00:10:43.585 fused_ordering(399) 00:10:43.585 fused_ordering(400) 00:10:43.585 fused_ordering(401) 00:10:43.585 fused_ordering(402) 00:10:43.585 fused_ordering(403) 00:10:43.585 fused_ordering(404) 00:10:43.585 fused_ordering(405) 00:10:43.585 fused_ordering(406) 00:10:43.585 fused_ordering(407) 00:10:43.585 fused_ordering(408) 00:10:43.585 fused_ordering(409) 00:10:43.585 fused_ordering(410) 00:10:44.522 fused_ordering(411) 00:10:44.522 fused_ordering(412) 00:10:44.522 fused_ordering(413) 00:10:44.522 fused_ordering(414) 00:10:44.522 fused_ordering(415) 00:10:44.522 fused_ordering(416) 00:10:44.522 fused_ordering(417) 00:10:44.522 fused_ordering(418) 00:10:44.522 fused_ordering(419) 
00:10:44.522 fused_ordering(420) 00:10:44.522 fused_ordering(421) 00:10:44.522 fused_ordering(422) 00:10:44.522 fused_ordering(423) 00:10:44.522 fused_ordering(424) 00:10:44.522 fused_ordering(425) 00:10:44.522 fused_ordering(426) 00:10:44.522 fused_ordering(427) 00:10:44.522 fused_ordering(428) 00:10:44.522 fused_ordering(429) 00:10:44.522 fused_ordering(430) 00:10:44.522 fused_ordering(431) 00:10:44.522 fused_ordering(432) 00:10:44.522 fused_ordering(433) 00:10:44.522 fused_ordering(434) 00:10:44.522 fused_ordering(435) 00:10:44.522 fused_ordering(436) 00:10:44.522 fused_ordering(437) 00:10:44.522 fused_ordering(438) 00:10:44.522 fused_ordering(439) 00:10:44.522 fused_ordering(440) 00:10:44.522 fused_ordering(441) 00:10:44.522 fused_ordering(442) 00:10:44.522 fused_ordering(443) 00:10:44.522 fused_ordering(444) 00:10:44.522 fused_ordering(445) 00:10:44.523 fused_ordering(446) 00:10:44.523 fused_ordering(447) 00:10:44.523 fused_ordering(448) 00:10:44.523 fused_ordering(449) 00:10:44.523 fused_ordering(450) 00:10:44.523 fused_ordering(451) 00:10:44.523 fused_ordering(452) 00:10:44.523 fused_ordering(453) 00:10:44.523 fused_ordering(454) 00:10:44.523 fused_ordering(455) 00:10:44.523 fused_ordering(456) 00:10:44.523 fused_ordering(457) 00:10:44.523 fused_ordering(458) 00:10:44.523 fused_ordering(459) 00:10:44.523 fused_ordering(460) 00:10:44.523 fused_ordering(461) 00:10:44.523 fused_ordering(462) 00:10:44.523 fused_ordering(463) 00:10:44.523 fused_ordering(464) 00:10:44.523 fused_ordering(465) 00:10:44.523 fused_ordering(466) 00:10:44.523 fused_ordering(467) 00:10:44.523 fused_ordering(468) 00:10:44.523 fused_ordering(469) 00:10:44.523 fused_ordering(470) 00:10:44.523 fused_ordering(471) 00:10:44.523 fused_ordering(472) 00:10:44.523 fused_ordering(473) 00:10:44.523 fused_ordering(474) 00:10:44.523 fused_ordering(475) 00:10:44.523 fused_ordering(476) 00:10:44.523 fused_ordering(477) 00:10:44.523 fused_ordering(478) 00:10:44.523 fused_ordering(479) 00:10:44.523 fused_ordering(480) 00:10:44.523 fused_ordering(481) 00:10:44.523 fused_ordering(482) 00:10:44.523 fused_ordering(483) 00:10:44.523 fused_ordering(484) 00:10:44.523 fused_ordering(485) 00:10:44.523 fused_ordering(486) 00:10:44.523 fused_ordering(487) 00:10:44.523 fused_ordering(488) 00:10:44.523 fused_ordering(489) 00:10:44.523 fused_ordering(490) 00:10:44.523 fused_ordering(491) 00:10:44.523 fused_ordering(492) 00:10:44.523 fused_ordering(493) 00:10:44.523 fused_ordering(494) 00:10:44.523 fused_ordering(495) 00:10:44.523 fused_ordering(496) 00:10:44.523 fused_ordering(497) 00:10:44.523 fused_ordering(498) 00:10:44.523 fused_ordering(499) 00:10:44.523 fused_ordering(500) 00:10:44.523 fused_ordering(501) 00:10:44.523 fused_ordering(502) 00:10:44.523 fused_ordering(503) 00:10:44.523 fused_ordering(504) 00:10:44.523 fused_ordering(505) 00:10:44.523 fused_ordering(506) 00:10:44.523 fused_ordering(507) 00:10:44.523 fused_ordering(508) 00:10:44.523 fused_ordering(509) 00:10:44.523 fused_ordering(510) 00:10:44.523 fused_ordering(511) 00:10:44.523 fused_ordering(512) 00:10:44.523 fused_ordering(513) 00:10:44.523 fused_ordering(514) 00:10:44.523 fused_ordering(515) 00:10:44.523 fused_ordering(516) 00:10:44.523 fused_ordering(517) 00:10:44.523 fused_ordering(518) 00:10:44.523 fused_ordering(519) 00:10:44.523 fused_ordering(520) 00:10:44.523 fused_ordering(521) 00:10:44.523 fused_ordering(522) 00:10:44.523 fused_ordering(523) 00:10:44.523 fused_ordering(524) 00:10:44.523 fused_ordering(525) 00:10:44.523 fused_ordering(526) 00:10:44.523 
fused_ordering(527) 00:10:44.523 fused_ordering(528) 00:10:44.523 fused_ordering(529) 00:10:44.523 fused_ordering(530) 00:10:44.523 fused_ordering(531) 00:10:44.523 fused_ordering(532) 00:10:44.523 fused_ordering(533) 00:10:44.523 fused_ordering(534) 00:10:44.523 fused_ordering(535) 00:10:44.523 fused_ordering(536) 00:10:44.523 fused_ordering(537) 00:10:44.523 fused_ordering(538) 00:10:44.523 fused_ordering(539) 00:10:44.523 fused_ordering(540) 00:10:44.523 fused_ordering(541) 00:10:44.523 fused_ordering(542) 00:10:44.523 fused_ordering(543) 00:10:44.523 fused_ordering(544) 00:10:44.523 fused_ordering(545) 00:10:44.523 fused_ordering(546) 00:10:44.523 fused_ordering(547) 00:10:44.523 fused_ordering(548) 00:10:44.523 fused_ordering(549) 00:10:44.523 fused_ordering(550) 00:10:44.523 fused_ordering(551) 00:10:44.523 fused_ordering(552) 00:10:44.523 fused_ordering(553) 00:10:44.523 fused_ordering(554) 00:10:44.523 fused_ordering(555) 00:10:44.523 fused_ordering(556) 00:10:44.523 fused_ordering(557) 00:10:44.523 fused_ordering(558) 00:10:44.523 fused_ordering(559) 00:10:44.523 fused_ordering(560) 00:10:44.523 fused_ordering(561) 00:10:44.523 fused_ordering(562) 00:10:44.523 fused_ordering(563) 00:10:44.523 fused_ordering(564) 00:10:44.523 fused_ordering(565) 00:10:44.523 fused_ordering(566) 00:10:44.523 fused_ordering(567) 00:10:44.523 fused_ordering(568) 00:10:44.523 fused_ordering(569) 00:10:44.523 fused_ordering(570) 00:10:44.523 fused_ordering(571) 00:10:44.523 fused_ordering(572) 00:10:44.523 fused_ordering(573) 00:10:44.523 fused_ordering(574) 00:10:44.523 fused_ordering(575) 00:10:44.523 fused_ordering(576) 00:10:44.523 fused_ordering(577) 00:10:44.523 fused_ordering(578) 00:10:44.523 fused_ordering(579) 00:10:44.523 fused_ordering(580) 00:10:44.523 fused_ordering(581) 00:10:44.523 fused_ordering(582) 00:10:44.523 fused_ordering(583) 00:10:44.523 fused_ordering(584) 00:10:44.523 fused_ordering(585) 00:10:44.523 fused_ordering(586) 00:10:44.523 fused_ordering(587) 00:10:44.523 fused_ordering(588) 00:10:44.523 fused_ordering(589) 00:10:44.523 fused_ordering(590) 00:10:44.523 fused_ordering(591) 00:10:44.523 fused_ordering(592) 00:10:44.523 fused_ordering(593) 00:10:44.523 fused_ordering(594) 00:10:44.523 fused_ordering(595) 00:10:44.523 fused_ordering(596) 00:10:44.523 fused_ordering(597) 00:10:44.523 fused_ordering(598) 00:10:44.523 fused_ordering(599) 00:10:44.523 fused_ordering(600) 00:10:44.523 fused_ordering(601) 00:10:44.523 fused_ordering(602) 00:10:44.523 fused_ordering(603) 00:10:44.523 fused_ordering(604) 00:10:44.523 fused_ordering(605) 00:10:44.523 fused_ordering(606) 00:10:44.523 fused_ordering(607) 00:10:44.523 fused_ordering(608) 00:10:44.523 fused_ordering(609) 00:10:44.523 fused_ordering(610) 00:10:44.523 fused_ordering(611) 00:10:44.523 fused_ordering(612) 00:10:44.523 fused_ordering(613) 00:10:44.523 fused_ordering(614) 00:10:44.523 fused_ordering(615) 00:10:45.092 fused_ordering(616) 00:10:45.092 fused_ordering(617) 00:10:45.092 fused_ordering(618) 00:10:45.092 fused_ordering(619) 00:10:45.092 fused_ordering(620) 00:10:45.092 fused_ordering(621) 00:10:45.092 fused_ordering(622) 00:10:45.092 fused_ordering(623) 00:10:45.092 fused_ordering(624) 00:10:45.092 fused_ordering(625) 00:10:45.092 fused_ordering(626) 00:10:45.092 fused_ordering(627) 00:10:45.092 fused_ordering(628) 00:10:45.092 fused_ordering(629) 00:10:45.092 fused_ordering(630) 00:10:45.092 fused_ordering(631) 00:10:45.092 fused_ordering(632) 00:10:45.092 fused_ordering(633) 00:10:45.092 fused_ordering(634) 
00:10:45.092 fused_ordering(635) 00:10:45.092 fused_ordering(636) 00:10:45.092 fused_ordering(637) 00:10:45.092 fused_ordering(638) 00:10:45.092 fused_ordering(639) 00:10:45.092 fused_ordering(640) 00:10:45.092 fused_ordering(641) 00:10:45.092 fused_ordering(642) 00:10:45.092 fused_ordering(643) 00:10:45.092 fused_ordering(644) 00:10:45.092 fused_ordering(645) 00:10:45.092 fused_ordering(646) 00:10:45.092 fused_ordering(647) 00:10:45.092 fused_ordering(648) 00:10:45.092 fused_ordering(649) 00:10:45.092 fused_ordering(650) 00:10:45.092 fused_ordering(651) 00:10:45.092 fused_ordering(652) 00:10:45.092 fused_ordering(653) 00:10:45.092 fused_ordering(654) 00:10:45.092 fused_ordering(655) 00:10:45.092 fused_ordering(656) 00:10:45.092 fused_ordering(657) 00:10:45.092 fused_ordering(658) 00:10:45.092 fused_ordering(659) 00:10:45.092 fused_ordering(660) 00:10:45.092 fused_ordering(661) 00:10:45.092 fused_ordering(662) 00:10:45.092 fused_ordering(663) 00:10:45.092 fused_ordering(664) 00:10:45.092 fused_ordering(665) 00:10:45.092 fused_ordering(666) 00:10:45.092 fused_ordering(667) 00:10:45.092 fused_ordering(668) 00:10:45.092 fused_ordering(669) 00:10:45.092 fused_ordering(670) 00:10:45.092 fused_ordering(671) 00:10:45.092 fused_ordering(672) 00:10:45.092 fused_ordering(673) 00:10:45.092 fused_ordering(674) 00:10:45.092 fused_ordering(675) 00:10:45.092 fused_ordering(676) 00:10:45.092 fused_ordering(677) 00:10:45.092 fused_ordering(678) 00:10:45.092 fused_ordering(679) 00:10:45.092 fused_ordering(680) 00:10:45.092 fused_ordering(681) 00:10:45.092 fused_ordering(682) 00:10:45.092 fused_ordering(683) 00:10:45.092 fused_ordering(684) 00:10:45.092 fused_ordering(685) 00:10:45.092 fused_ordering(686) 00:10:45.092 fused_ordering(687) 00:10:45.092 fused_ordering(688) 00:10:45.092 fused_ordering(689) 00:10:45.092 fused_ordering(690) 00:10:45.092 fused_ordering(691) 00:10:45.092 fused_ordering(692) 00:10:45.092 fused_ordering(693) 00:10:45.092 fused_ordering(694) 00:10:45.092 fused_ordering(695) 00:10:45.092 fused_ordering(696) 00:10:45.092 fused_ordering(697) 00:10:45.092 fused_ordering(698) 00:10:45.092 fused_ordering(699) 00:10:45.092 fused_ordering(700) 00:10:45.092 fused_ordering(701) 00:10:45.092 fused_ordering(702) 00:10:45.092 fused_ordering(703) 00:10:45.092 fused_ordering(704) 00:10:45.092 fused_ordering(705) 00:10:45.092 fused_ordering(706) 00:10:45.092 fused_ordering(707) 00:10:45.092 fused_ordering(708) 00:10:45.092 fused_ordering(709) 00:10:45.092 fused_ordering(710) 00:10:45.092 fused_ordering(711) 00:10:45.092 fused_ordering(712) 00:10:45.092 fused_ordering(713) 00:10:45.092 fused_ordering(714) 00:10:45.092 fused_ordering(715) 00:10:45.092 fused_ordering(716) 00:10:45.092 fused_ordering(717) 00:10:45.092 fused_ordering(718) 00:10:45.092 fused_ordering(719) 00:10:45.092 fused_ordering(720) 00:10:45.092 fused_ordering(721) 00:10:45.092 fused_ordering(722) 00:10:45.092 fused_ordering(723) 00:10:45.093 fused_ordering(724) 00:10:45.093 fused_ordering(725) 00:10:45.093 fused_ordering(726) 00:10:45.093 fused_ordering(727) 00:10:45.093 fused_ordering(728) 00:10:45.093 fused_ordering(729) 00:10:45.093 fused_ordering(730) 00:10:45.093 fused_ordering(731) 00:10:45.093 fused_ordering(732) 00:10:45.093 fused_ordering(733) 00:10:45.093 fused_ordering(734) 00:10:45.093 fused_ordering(735) 00:10:45.093 fused_ordering(736) 00:10:45.093 fused_ordering(737) 00:10:45.093 fused_ordering(738) 00:10:45.093 fused_ordering(739) 00:10:45.093 fused_ordering(740) 00:10:45.093 fused_ordering(741) 00:10:45.093 
fused_ordering(742) 00:10:45.093 fused_ordering(743) 00:10:45.093 fused_ordering(744) 00:10:45.093 fused_ordering(745) 00:10:45.093 fused_ordering(746) 00:10:45.093 fused_ordering(747) 00:10:45.093 fused_ordering(748) 00:10:45.093 fused_ordering(749) 00:10:45.093 fused_ordering(750) 00:10:45.093 fused_ordering(751) 00:10:45.093 fused_ordering(752) 00:10:45.093 fused_ordering(753) 00:10:45.093 fused_ordering(754) 00:10:45.093 fused_ordering(755) 00:10:45.093 fused_ordering(756) 00:10:45.093 fused_ordering(757) 00:10:45.093 fused_ordering(758) 00:10:45.093 fused_ordering(759) 00:10:45.093 fused_ordering(760) 00:10:45.093 fused_ordering(761) 00:10:45.093 fused_ordering(762) 00:10:45.093 fused_ordering(763) 00:10:45.093 fused_ordering(764) 00:10:45.093 fused_ordering(765) 00:10:45.093 fused_ordering(766) 00:10:45.093 fused_ordering(767) 00:10:45.093 fused_ordering(768) 00:10:45.093 fused_ordering(769) 00:10:45.093 fused_ordering(770) 00:10:45.093 fused_ordering(771) 00:10:45.093 fused_ordering(772) 00:10:45.093 fused_ordering(773) 00:10:45.093 fused_ordering(774) 00:10:45.093 fused_ordering(775) 00:10:45.093 fused_ordering(776) 00:10:45.093 fused_ordering(777) 00:10:45.093 fused_ordering(778) 00:10:45.093 fused_ordering(779) 00:10:45.093 fused_ordering(780) 00:10:45.093 fused_ordering(781) 00:10:45.093 fused_ordering(782) 00:10:45.093 fused_ordering(783) 00:10:45.093 fused_ordering(784) 00:10:45.093 fused_ordering(785) 00:10:45.093 fused_ordering(786) 00:10:45.093 fused_ordering(787) 00:10:45.093 fused_ordering(788) 00:10:45.093 fused_ordering(789) 00:10:45.093 fused_ordering(790) 00:10:45.093 fused_ordering(791) 00:10:45.093 fused_ordering(792) 00:10:45.093 fused_ordering(793) 00:10:45.093 fused_ordering(794) 00:10:45.093 fused_ordering(795) 00:10:45.093 fused_ordering(796) 00:10:45.093 fused_ordering(797) 00:10:45.093 fused_ordering(798) 00:10:45.093 fused_ordering(799) 00:10:45.093 fused_ordering(800) 00:10:45.093 fused_ordering(801) 00:10:45.093 fused_ordering(802) 00:10:45.093 fused_ordering(803) 00:10:45.093 fused_ordering(804) 00:10:45.093 fused_ordering(805) 00:10:45.093 fused_ordering(806) 00:10:45.093 fused_ordering(807) 00:10:45.093 fused_ordering(808) 00:10:45.093 fused_ordering(809) 00:10:45.093 fused_ordering(810) 00:10:45.093 fused_ordering(811) 00:10:45.093 fused_ordering(812) 00:10:45.093 fused_ordering(813) 00:10:45.093 fused_ordering(814) 00:10:45.093 fused_ordering(815) 00:10:45.093 fused_ordering(816) 00:10:45.093 fused_ordering(817) 00:10:45.093 fused_ordering(818) 00:10:45.093 fused_ordering(819) 00:10:45.093 fused_ordering(820) 00:10:45.662 fused_ordering(821) 00:10:45.662 fused_ordering(822) 00:10:45.662 fused_ordering(823) 00:10:45.662 fused_ordering(824) 00:10:45.662 fused_ordering(825) 00:10:45.662 fused_ordering(826) 00:10:45.662 fused_ordering(827) 00:10:45.662 fused_ordering(828) 00:10:45.662 fused_ordering(829) 00:10:45.662 fused_ordering(830) 00:10:45.662 fused_ordering(831) 00:10:45.662 fused_ordering(832) 00:10:45.662 fused_ordering(833) 00:10:45.662 fused_ordering(834) 00:10:45.662 fused_ordering(835) 00:10:45.662 fused_ordering(836) 00:10:45.662 fused_ordering(837) 00:10:45.662 fused_ordering(838) 00:10:45.662 fused_ordering(839) 00:10:45.662 fused_ordering(840) 00:10:45.662 fused_ordering(841) 00:10:45.662 fused_ordering(842) 00:10:45.662 fused_ordering(843) 00:10:45.662 fused_ordering(844) 00:10:45.662 fused_ordering(845) 00:10:45.662 fused_ordering(846) 00:10:45.662 fused_ordering(847) 00:10:45.662 fused_ordering(848) 00:10:45.662 fused_ordering(849) 
00:10:45.662 fused_ordering(850) 00:10:45.662 fused_ordering(851) 00:10:45.662 fused_ordering(852) 00:10:45.662 fused_ordering(853) 00:10:45.662 fused_ordering(854) 00:10:45.662 fused_ordering(855) 00:10:45.662 fused_ordering(856) 00:10:45.662 fused_ordering(857) 00:10:45.662 fused_ordering(858) 00:10:45.662 fused_ordering(859) 00:10:45.662 fused_ordering(860) 00:10:45.662 fused_ordering(861) 00:10:45.662 fused_ordering(862) 00:10:45.662 fused_ordering(863) 00:10:45.662 fused_ordering(864) 00:10:45.662 fused_ordering(865) 00:10:45.662 fused_ordering(866) 00:10:45.662 fused_ordering(867) 00:10:45.662 fused_ordering(868) 00:10:45.662 fused_ordering(869) 00:10:45.662 fused_ordering(870) 00:10:45.662 fused_ordering(871) 00:10:45.662 fused_ordering(872) 00:10:45.662 fused_ordering(873) 00:10:45.662 fused_ordering(874) 00:10:45.662 fused_ordering(875) 00:10:45.662 fused_ordering(876) 00:10:45.662 fused_ordering(877) 00:10:45.662 fused_ordering(878) 00:10:45.662 fused_ordering(879) 00:10:45.662 fused_ordering(880) 00:10:45.662 fused_ordering(881) 00:10:45.662 fused_ordering(882) 00:10:45.662 fused_ordering(883) 00:10:45.662 fused_ordering(884) 00:10:45.662 fused_ordering(885) 00:10:45.662 fused_ordering(886) 00:10:45.662 fused_ordering(887) 00:10:45.662 fused_ordering(888) 00:10:45.662 fused_ordering(889) 00:10:45.662 fused_ordering(890) 00:10:45.662 fused_ordering(891) 00:10:45.662 fused_ordering(892) 00:10:45.662 fused_ordering(893) 00:10:45.662 fused_ordering(894) 00:10:45.662 fused_ordering(895) 00:10:45.662 fused_ordering(896) 00:10:45.662 fused_ordering(897) 00:10:45.662 fused_ordering(898) 00:10:45.662 fused_ordering(899) 00:10:45.663 fused_ordering(900) 00:10:45.663 fused_ordering(901) 00:10:45.663 fused_ordering(902) 00:10:45.663 fused_ordering(903) 00:10:45.663 fused_ordering(904) 00:10:45.663 fused_ordering(905) 00:10:45.663 fused_ordering(906) 00:10:45.663 fused_ordering(907) 00:10:45.663 fused_ordering(908) 00:10:45.663 fused_ordering(909) 00:10:45.663 fused_ordering(910) 00:10:45.663 fused_ordering(911) 00:10:45.663 fused_ordering(912) 00:10:45.663 fused_ordering(913) 00:10:45.663 fused_ordering(914) 00:10:45.663 fused_ordering(915) 00:10:45.663 fused_ordering(916) 00:10:45.663 fused_ordering(917) 00:10:45.663 fused_ordering(918) 00:10:45.663 fused_ordering(919) 00:10:45.663 fused_ordering(920) 00:10:45.663 fused_ordering(921) 00:10:45.663 fused_ordering(922) 00:10:45.663 fused_ordering(923) 00:10:45.663 fused_ordering(924) 00:10:45.663 fused_ordering(925) 00:10:45.663 fused_ordering(926) 00:10:45.663 fused_ordering(927) 00:10:45.663 fused_ordering(928) 00:10:45.663 fused_ordering(929) 00:10:45.663 fused_ordering(930) 00:10:45.663 fused_ordering(931) 00:10:45.663 fused_ordering(932) 00:10:45.663 fused_ordering(933) 00:10:45.663 fused_ordering(934) 00:10:45.663 fused_ordering(935) 00:10:45.663 fused_ordering(936) 00:10:45.663 fused_ordering(937) 00:10:45.663 fused_ordering(938) 00:10:45.663 fused_ordering(939) 00:10:45.663 fused_ordering(940) 00:10:45.663 fused_ordering(941) 00:10:45.663 fused_ordering(942) 00:10:45.663 fused_ordering(943) 00:10:45.663 fused_ordering(944) 00:10:45.663 fused_ordering(945) 00:10:45.663 fused_ordering(946) 00:10:45.663 fused_ordering(947) 00:10:45.663 fused_ordering(948) 00:10:45.663 fused_ordering(949) 00:10:45.663 fused_ordering(950) 00:10:45.663 fused_ordering(951) 00:10:45.663 fused_ordering(952) 00:10:45.663 fused_ordering(953) 00:10:45.663 fused_ordering(954) 00:10:45.663 fused_ordering(955) 00:10:45.663 fused_ordering(956) 00:10:45.663 
fused_ordering(957) 00:10:45.663 fused_ordering(958) 00:10:45.663 fused_ordering(959) 00:10:45.663 fused_ordering(960) 00:10:45.663 fused_ordering(961) 00:10:45.663 fused_ordering(962) 00:10:45.663 fused_ordering(963) 00:10:45.663 fused_ordering(964) 00:10:45.663 fused_ordering(965) 00:10:45.663 fused_ordering(966) 00:10:45.663 fused_ordering(967) 00:10:45.663 fused_ordering(968) 00:10:45.663 fused_ordering(969) 00:10:45.663 fused_ordering(970) 00:10:45.663 fused_ordering(971) 00:10:45.663 fused_ordering(972) 00:10:45.663 fused_ordering(973) 00:10:45.663 fused_ordering(974) 00:10:45.663 fused_ordering(975) 00:10:45.663 fused_ordering(976) 00:10:45.663 fused_ordering(977) 00:10:45.663 fused_ordering(978) 00:10:45.663 fused_ordering(979) 00:10:45.663 fused_ordering(980) 00:10:45.663 fused_ordering(981) 00:10:45.663 fused_ordering(982) 00:10:45.663 fused_ordering(983) 00:10:45.663 fused_ordering(984) 00:10:45.663 fused_ordering(985) 00:10:45.663 fused_ordering(986) 00:10:45.663 fused_ordering(987) 00:10:45.663 fused_ordering(988) 00:10:45.663 fused_ordering(989) 00:10:45.663 fused_ordering(990) 00:10:45.663 fused_ordering(991) 00:10:45.663 fused_ordering(992) 00:10:45.663 fused_ordering(993) 00:10:45.663 fused_ordering(994) 00:10:45.663 fused_ordering(995) 00:10:45.663 fused_ordering(996) 00:10:45.663 fused_ordering(997) 00:10:45.663 fused_ordering(998) 00:10:45.663 fused_ordering(999) 00:10:45.663 fused_ordering(1000) 00:10:45.663 fused_ordering(1001) 00:10:45.663 fused_ordering(1002) 00:10:45.663 fused_ordering(1003) 00:10:45.663 fused_ordering(1004) 00:10:45.663 fused_ordering(1005) 00:10:45.663 fused_ordering(1006) 00:10:45.663 fused_ordering(1007) 00:10:45.663 fused_ordering(1008) 00:10:45.663 fused_ordering(1009) 00:10:45.663 fused_ordering(1010) 00:10:45.663 fused_ordering(1011) 00:10:45.663 fused_ordering(1012) 00:10:45.663 fused_ordering(1013) 00:10:45.663 fused_ordering(1014) 00:10:45.663 fused_ordering(1015) 00:10:45.663 fused_ordering(1016) 00:10:45.663 fused_ordering(1017) 00:10:45.663 fused_ordering(1018) 00:10:45.663 fused_ordering(1019) 00:10:45.663 fused_ordering(1020) 00:10:45.663 fused_ordering(1021) 00:10:45.663 fused_ordering(1022) 00:10:45.663 fused_ordering(1023) 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:45.923 rmmod nvme_tcp 00:10:45.923 rmmod nvme_fabrics 00:10:45.923 rmmod nvme_keyring 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3489664 ']' 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3489664 
00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3489664 ']' 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3489664 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3489664 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3489664' 00:10:45.923 killing process with pid 3489664 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3489664 00:10:45.923 [2024-05-14 23:52:46.392335] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:45.923 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3489664 00:10:46.183 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.183 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.183 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.183 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.183 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.183 23:52:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.183 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.183 23:52:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.091 23:52:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:48.091 00:10:48.091 real 0m13.310s 00:10:48.091 user 0m7.627s 00:10:48.091 sys 0m7.512s 00:10:48.091 23:52:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:48.091 23:52:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:48.091 ************************************ 00:10:48.091 END TEST nvmf_fused_ordering 00:10:48.091 ************************************ 00:10:48.351 23:52:48 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:48.351 23:52:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:48.351 23:52:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:48.351 23:52:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:48.351 ************************************ 00:10:48.351 START TEST nvmf_delete_subsystem 00:10:48.351 ************************************ 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
00:10:48.351 * Looking for test storage... 00:10:48.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.351 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:48.352 23:52:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:54.926 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:54.926 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:54.926 Found net devices under 0000:af:00.0: cvl_0_0 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:54.926 Found net devices under 0000:af:00.1: cvl_0_1 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:54.926 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.927 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.927 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:54.927 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.927 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.927 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:54.927 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:54.927 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.927 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.927 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.927 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:55.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:10:55.186 00:10:55.186 --- 10.0.0.2 ping statistics --- 00:10:55.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.186 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:10:55.186 00:10:55.186 --- 10.0.0.1 ping statistics --- 00:10:55.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.186 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3494176 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3494176 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3494176 ']' 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
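The two successful pings above confirm the point-to-point topology that nvmf_tcp_init builds out of the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the default namespace as 10.0.0.1 (initiator side), with an iptables rule opening TCP port 4420. A rough hand-run equivalent of the setup exercised above, assuming the same cvl_0_0/cvl_0_1 interface names, would be:

  ip netns add cvl_0_0_ns_spdk                                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in

The nvmf_tgt launched just above runs inside that namespace with core mask 0x3 (binary 11, i.e. cores 0 and 1), which is why the startup log that follows reports reactors on cores 0 and 1.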
00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:55.186 23:52:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.186 [2024-05-14 23:52:55.763253] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:10:55.186 [2024-05-14 23:52:55.763300] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.446 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.446 [2024-05-14 23:52:55.836658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:55.446 [2024-05-14 23:52:55.904430] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.446 [2024-05-14 23:52:55.904469] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.446 [2024-05-14 23:52:55.904478] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.446 [2024-05-14 23:52:55.904486] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.446 [2024-05-14 23:52:55.904493] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.446 [2024-05-14 23:52:55.904592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.446 [2024-05-14 23:52:55.904595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.015 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:56.015 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:10:56.015 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.015 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.015 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.015 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.015 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.015 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.015 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.274 [2024-05-14 23:52:56.609094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.274 23:52:56 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.274 [2024-05-14 23:52:56.625077] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:56.274 [2024-05-14 23:52:56.625331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.274 NULL1 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.274 Delay0 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3494447 00:10:56.274 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:56.275 23:52:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:56.275 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.275 [2024-05-14 23:52:56.709850] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
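The rpc_cmd calls in this stretch are the entire data-plane setup for the test: one TCP transport, one subsystem (nqn.2016-06.io.spdk:cnode1) with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that I/O stays in flight long enough to be interrupted. In the harness rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py, so a hand-run sketch of the same sequence (assuming the default /var/tmp/spdk.sock RPC socket) would look like:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, 8 KiB IO unit size
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                              # 1000 MiB null bdev, 512-byte blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The spdk_nvme_perf invocation that follows then drives load against that namespace from the initiator side: -c 0xC (binary 1100) puts the I/O workers on cores 2 and 3, matching the 'from core 2/3' rows in the latency summaries further down, -q 128 sets the queue depth, -w randrw -M 70 asks for a 70/30 random read/write mix, -o 512 uses 512-byte I/O, and -t 5 runs for five seconds.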
00:10:58.182 23:52:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.182 23:52:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.182 23:52:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Write completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 starting I/O failed: -6 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 [2024-05-14 23:52:58.879385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ad980 is same with the state(5) to be set 00:10:58.442 Read completed with error (sct=0, sc=8) 00:10:58.442 Read completed with 
error (sct=0, sc=8) 00:10:58.442 [repetitive per-I/O output condensed: a long run of 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers, timestamps 00:10:58.442 through 00:10:59.381, as queued I/O drains after the subsystem delete; the distinct error messages in this stretch were:]
[2024-05-14 23:52:58.880606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f092400bfe0 is same with the state(5) to be set
[2024-05-14 23:52:58.881126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0924000c00 is same with the state(5) to be set
[2024-05-14 23:52:58.881341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f092400c600 is same with the state(5) to be set
[2024-05-14 23:52:59.849540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b0420 is same with the state(5) to be set
[2024-05-14 23:52:59.881559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f092400c2f0 is same with the state(5) to be set
[2024-05-14 23:52:59.883342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16afc40 is same with the state(5) to be set
00:10:59.381 Read completed with error
(sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 [2024-05-14 23:52:59.883482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16afe20 is same with the state(5) to be set 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Read completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 Write completed with error (sct=0, sc=8) 00:10:59.381 [2024-05-14 23:52:59.883913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16adb60 is same with the state(5) to be set 00:10:59.381 Initializing NVMe Controllers 00:10:59.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.381 Controller IO queue size 128, less than required. 00:10:59.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:59.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:59.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:59.381 Initialization complete. Launching workers. 
00:10:59.381 ======================================================== 00:10:59.382 Latency(us) 00:10:59.382 Device Information : IOPS MiB/s Average min max 00:10:59.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.39 0.09 962561.17 1285.69 1010972.78 00:10:59.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 148.06 0.07 908688.46 535.48 1012441.48 00:10:59.382 ======================================================== 00:10:59.382 Total : 323.44 0.16 937900.55 535.48 1012441.48 00:10:59.382 00:10:59.382 [2024-05-14 23:52:59.884455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b0420 (9): Bad file descriptor 00:10:59.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:59.382 23:52:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.382 23:52:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:59.382 23:52:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3494447 00:10:59.382 23:52:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3494447 00:10:59.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3494447) - No such process 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3494447 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3494447 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3494447 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
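This first spdk_nvme_perf run is expected to end exactly like this: the subsystem is deleted while I/O against the delayed namespace is still queued, so outstanding requests complete with errors and perf exits non-zero after the short latency summary above. The script then only has to wait for the perf process to disappear; per the xtrace of delete_subsystem.sh lines 34-38, the wait amounts to roughly the following loop (the timeout handling shown is a guess, only the structure is taken from the trace):

  delay=0
  while kill -0 $perf_pid; do              # line 35: is perf still running?
      sleep 0.5                            # line 36
      if ((delay++ > 30)); then            # line 38: give up after ~15 s
          echo 'perf did not exit' && exit 1    # hypothetical failure path
      fi
  done

Once kill -0 reports 'No such process', the test re-creates the subsystem and listener and starts a second, shorter perf pass that is allowed to finish cleanly.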
00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.950 [2024-05-14 23:53:00.410820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3494996 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494996 00:10:59.950 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:59.950 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.950 [2024-05-14 23:53:00.481546] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:11:00.528 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:00.528 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494996 00:11:00.528 23:53:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:01.105 23:53:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:01.105 23:53:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494996 00:11:01.105 23:53:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:01.364 23:53:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:01.364 23:53:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494996 00:11:01.364 23:53:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:01.931 23:53:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:01.931 23:53:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494996 00:11:01.931 23:53:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:02.499 23:53:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:02.499 23:53:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494996 00:11:02.499 23:53:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:03.066 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:03.066 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494996 00:11:03.066 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:03.324 Initializing NVMe Controllers 00:11:03.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:03.324 Controller IO queue size 128, less than required. 00:11:03.324 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:03.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:03.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:03.324 Initialization complete. Launching workers. 
00:11:03.324 ======================================================== 00:11:03.324 Latency(us) 00:11:03.324 Device Information : IOPS MiB/s Average min max 00:11:03.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003873.12 1000299.63 1011874.43 00:11:03.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005141.16 1000432.07 1041099.59 00:11:03.324 ======================================================== 00:11:03.324 Total : 256.00 0.12 1004507.14 1000299.63 1041099.59 00:11:03.324 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494996 00:11:03.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3494996) - No such process 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3494996 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:03.583 23:53:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:03.583 rmmod nvme_tcp 00:11:03.583 rmmod nvme_fabrics 00:11:03.583 rmmod nvme_keyring 00:11:03.583 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:03.583 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:03.583 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:03.583 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3494176 ']' 00:11:03.583 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3494176 00:11:03.583 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3494176 ']' 00:11:03.583 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3494176 00:11:03.583 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:11:03.583 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:03.583 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3494176 00:11:03.584 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:03.584 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:03.584 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3494176' 00:11:03.584 killing process with pid 3494176 00:11:03.584 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3494176 00:11:03.584 [2024-05-14 23:53:04.081060] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:03.584 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 3494176 00:11:03.842 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.842 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.842 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.842 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.842 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.842 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.842 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.842 23:53:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.378 23:53:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:06.378 00:11:06.378 real 0m17.600s 00:11:06.378 user 0m29.931s 00:11:06.378 sys 0m7.109s 00:11:06.378 23:53:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:06.378 23:53:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:06.378 ************************************ 00:11:06.378 END TEST nvmf_delete_subsystem 00:11:06.378 ************************************ 00:11:06.378 23:53:06 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:06.378 23:53:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:06.378 23:53:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:06.378 23:53:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:06.378 ************************************ 00:11:06.378 START TEST nvmf_ns_masking 00:11:06.378 ************************************ 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:06.378 * Looking for test storage... 
00:11:06.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.378 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=637f31e0-1fdd-46b9-b939-3b70650a387a 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:06.379 23:53:06 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:06.379 23:53:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:12.951 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:12.951 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.951 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:12.952 Found net devices under 0000:af:00.0: cvl_0_0 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:12.952 Found net devices under 0000:af:00.1: cvl_0_1 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.952 23:53:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:11:12.952 00:11:12.952 --- 10.0.0.2 ping statistics --- 00:11:12.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.952 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:11:12.952 00:11:12.952 --- 10.0.0.1 ping statistics --- 00:11:12.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.952 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3499236 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3499236 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3499236 ']' 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:12.952 23:53:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:12.952 [2024-05-14 23:53:13.242666] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
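The nvmfappstart/waitforlisten steps traced just above (nvmf/common.sh@480-482 and autotest_common.sh@831 onwards) amount to launching the target inside the freshly created cvl_0_0_ns_spdk network namespace and polling its RPC socket before any rpc.py calls are issued. A minimal bash sketch of that sequence, assuming the usual SPDK app-framework flag meanings; the SPDK variable below is just shorthand for the workspace path used throughout this job:

# Sketch of the target start-up performed above (assumed flag semantics, not a verbatim
# copy of the harness):
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
#   -i 0       shared-memory id of this app instance
#   -e 0xFFFF  tracepoint group mask (the "Tracepoint Group Mask 0xFFFF specified" notice below)
#   -m 0xF     core mask: reactors on cores 0-3
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# roughly what waitforlisten does: wait for /var/tmp/spdk.sock, then confirm the app answers RPCs
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
"$SPDK/scripts/rpc.py" rpc_get_methods > /dev/null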
00:11:12.952 [2024-05-14 23:53:13.242714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.952 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.952 [2024-05-14 23:53:13.316980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.952 [2024-05-14 23:53:13.392319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.952 [2024-05-14 23:53:13.392356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.952 [2024-05-14 23:53:13.392369] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.952 [2024-05-14 23:53:13.392379] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.952 [2024-05-14 23:53:13.392387] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.952 [2024-05-14 23:53:13.392430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.952 [2024-05-14 23:53:13.392539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.952 [2024-05-14 23:53:13.392557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.952 [2024-05-14 23:53:13.392558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.521 23:53:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:13.521 23:53:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:11:13.521 23:53:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.521 23:53:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.521 23:53:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.521 23:53:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.521 23:53:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:13.779 [2024-05-14 23:53:14.249593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.779 23:53:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:13.779 23:53:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:13.779 23:53:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:14.039 Malloc1 00:11:14.039 23:53:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:14.039 Malloc2 00:11:14.298 23:53:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.298 23:53:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:14.557 23:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.817 [2024-05-14 23:53:15.164207] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:14.817 [2024-05-14 23:53:15.164469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.817 23:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:14.817 23:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 637f31e0-1fdd-46b9-b939-3b70650a387a -a 10.0.0.2 -s 4420 -i 4 00:11:14.817 23:53:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:14.817 23:53:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:14.817 23:53:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.817 23:53:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:14.817 23:53:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:17.354 [ 0]:0x1 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=266d6e83054b443b9cdfe47cea124259 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 266d6e83054b443b9cdfe47cea124259 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:17.354 [ 0]:0x1 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=266d6e83054b443b9cdfe47cea124259 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 266d6e83054b443b9cdfe47cea124259 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:17.354 [ 1]:0x2 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56ca91c78147497faad9ebf8db23720b 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56ca91c78147497faad9ebf8db23720b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:17.354 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.614 23:53:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.614 23:53:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:17.873 23:53:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:11:17.873 23:53:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 637f31e0-1fdd-46b9-b939-3b70650a387a -a 10.0.0.2 -s 4420 -i 4 00:11:18.134 23:53:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:18.134 23:53:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:18.134 23:53:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.134 23:53:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:11:18.134 23:53:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:11:18.134 23:53:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:20.056 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:20.324 [ 0]:0x2 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.324 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56ca91c78147497faad9ebf8db23720b 00:11:20.325 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56ca91c78147497faad9ebf8db23720b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.325 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:20.325 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:20.325 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.325 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:20.325 [ 0]:0x1 00:11:20.325 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:20.325 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.584 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=266d6e83054b443b9cdfe47cea124259 00:11:20.584 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 266d6e83054b443b9cdfe47cea124259 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.584 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:20.584 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.584 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:20.584 [ 1]:0x2 00:11:20.584 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:20.584 23:53:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.584 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56ca91c78147497faad9ebf8db23720b 00:11:20.584 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56ca91c78147497faad9ebf8db23720b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.584 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:20.843 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:20.844 23:53:21 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:20.844 [ 0]:0x2 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56ca91c78147497faad9ebf8db23720b 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56ca91c78147497faad9ebf8db23720b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.844 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.103 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:11:21.103 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 637f31e0-1fdd-46b9-b939-3b70650a387a -a 10.0.0.2 -s 4420 -i 4 00:11:21.363 23:53:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:21.363 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:21.363 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.363 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:21.363 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:21.363 23:53:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:23.269 23:53:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:23.269 23:53:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:23.269 23:53:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.269 23:53:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:23.269 23:53:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.269 23:53:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:23.269 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:11:23.269 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:23.528 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:23.528 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:23.528 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:23.528 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:23.528 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:23.528 [ 0]:0x1 00:11:23.528 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:23.528 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:23.528 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=266d6e83054b443b9cdfe47cea124259 00:11:23.528 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 266d6e83054b443b9cdfe47cea124259 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.528 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:23.529 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:23.529 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:23.529 [ 1]:0x2 00:11:23.529 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:23.529 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:23.529 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56ca91c78147497faad9ebf8db23720b 00:11:23.529 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56ca91c78147497faad9ebf8db23720b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.529 23:53:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:23.788 [ 0]:0x2 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56ca91c78147497faad9ebf8db23720b 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56ca91c78147497faad9ebf8db23720b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:23.788 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:24.048 [2024-05-14 23:53:24.418228] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:24.048 
request: 00:11:24.048 { 00:11:24.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.048 "nsid": 2, 00:11:24.048 "host": "nqn.2016-06.io.spdk:host1", 00:11:24.048 "method": "nvmf_ns_remove_host", 00:11:24.048 "req_id": 1 00:11:24.048 } 00:11:24.048 Got JSON-RPC error response 00:11:24.048 response: 00:11:24.048 { 00:11:24.048 "code": -32602, 00:11:24.048 "message": "Invalid parameters" 00:11:24.048 } 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:24.048 [ 0]:0x2 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56ca91c78147497faad9ebf8db23720b 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56ca91c78147497faad9ebf8db23720b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:24.048 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.307 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.567 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:24.567 23:53:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:24.567 23:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.567 23:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:24.567 23:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.567 23:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:24.567 23:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.567 23:53:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:24.567 rmmod nvme_tcp 00:11:24.567 rmmod nvme_fabrics 00:11:24.567 rmmod nvme_keyring 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3499236 ']' 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3499236 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3499236 ']' 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3499236 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3499236 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3499236' 00:11:24.567 killing process with pid 3499236 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3499236 00:11:24.567 [2024-05-14 23:53:25.103352] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:24.567 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3499236 00:11:24.827 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.827 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.827 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.827 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:11:24.827 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.827 23:53:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.827 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.827 23:53:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.365 23:53:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.365 00:11:27.365 real 0m20.960s 00:11:27.365 user 0m51.426s 00:11:27.365 sys 0m7.448s 00:11:27.365 23:53:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.365 23:53:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:27.365 ************************************ 00:11:27.365 END TEST nvmf_ns_masking 00:11:27.365 ************************************ 00:11:27.365 23:53:27 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:27.365 23:53:27 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:27.365 23:53:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:27.365 23:53:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:27.365 23:53:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.365 ************************************ 00:11:27.365 START TEST nvmf_nvme_cli 00:11:27.366 ************************************ 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:27.366 * Looking for test storage... 
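The ns_masking steps above come down to a handful of nvme-cli and SPDK RPC calls. A minimal sketch of that flow, assuming the target still exposes nqn.2016-06.io.spdk:cnode1, the initiator sees it as /dev/nvme0, and rpc.py lives at the workspace path used throughout this run:

    # Map the subsystem NQN to its local controller node (ctrl_id=nvme0 above)
    nvme list-subsys -o json | \
        jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
    # List the namespaces this host can see and read the NGUID of namespace 1;
    # in this test an all-zero NGUID means the namespace is no longer visible
    nvme list-ns /dev/nvme0
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
    # Hide namespace 1 from host1, then repeat the two checks above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # The same call against namespace 2 fails with the JSON-RPC
    # "Invalid parameters" response captured above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1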
00:11:27.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:27.366 23:53:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:33.934 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.934 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:33.935 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:33.935 Found net devices under 0000:af:00.0: cvl_0_0 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:33.935 Found net devices under 0000:af:00.1: cvl_0_1 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.935 23:53:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:33.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:11:33.935 00:11:33.935 --- 10.0.0.2 ping statistics --- 00:11:33.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.935 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:11:33.935 00:11:33.935 --- 10.0.0.1 ping statistics --- 00:11:33.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.935 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3505205 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3505205 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3505205 ']' 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:33.935 23:53:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.935 [2024-05-14 23:53:34.294076] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:11:33.935 [2024-05-14 23:53:34.294124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.935 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.935 [2024-05-14 23:53:34.368336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.935 [2024-05-14 23:53:34.436556] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.935 [2024-05-14 23:53:34.436596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
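The nvmftestinit wiring above is what makes the 10.0.0.x pings work: the target-side e810 port (cvl_0_0) is moved into a private network namespace and given the target address, while the initiator keeps cvl_0_1 in the default namespace. A rough manual equivalent, using the interface names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator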
00:11:33.935 [2024-05-14 23:53:34.436605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.935 [2024-05-14 23:53:34.436613] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.935 [2024-05-14 23:53:34.436637] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.935 [2024-05-14 23:53:34.436689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.935 [2024-05-14 23:53:34.436781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.935 [2024-05-14 23:53:34.436869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.935 [2024-05-14 23:53:34.436871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.504 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:34.504 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:11:34.504 23:53:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.504 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.504 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.763 [2024-05-14 23:53:35.142048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.763 Malloc0 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.763 Malloc1 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.763 23:53:35 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.763 [2024-05-14 23:53:35.225868] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:34.763 [2024-05-14 23:53:35.226152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.763 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:11:35.023 00:11:35.023 Discovery Log Number of Records 2, Generation counter 2 00:11:35.023 =====Discovery Log Entry 0====== 00:11:35.023 trtype: tcp 00:11:35.023 adrfam: ipv4 00:11:35.023 subtype: current discovery subsystem 00:11:35.023 treq: not required 00:11:35.023 portid: 0 00:11:35.023 trsvcid: 4420 00:11:35.023 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:35.023 traddr: 10.0.0.2 00:11:35.023 eflags: explicit discovery connections, duplicate discovery information 00:11:35.023 sectype: none 00:11:35.023 =====Discovery Log Entry 1====== 00:11:35.023 trtype: tcp 00:11:35.023 adrfam: ipv4 00:11:35.023 subtype: nvme subsystem 00:11:35.023 treq: not required 00:11:35.023 portid: 0 00:11:35.023 trsvcid: 4420 00:11:35.023 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:35.023 traddr: 10.0.0.2 00:11:35.023 eflags: none 00:11:35.023 sectype: none 00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
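The rpc_cmd calls above, together with the nvme discover/connect that follows, are the whole TCP bring-up for this test. Spelled out as a sketch, with rpc_cmd expanded to the workspace rpc.py and $NVME_HOSTNQN/$NVME_HOSTID standing in for the host NQN and ID generated by common.sh earlier:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME \
        -d SPDK_Controller1 -i 291
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: discover the subsystem, connect, and later tear down
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect  --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1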
00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:35.023 23:53:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.402 23:53:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:36.402 23:53:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:11:36.402 23:53:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.402 23:53:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:36.402 23:53:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:36.402 23:53:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:38.308 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:38.309 /dev/nvme0n1 ]] 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:38.309 23:53:38 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:38.309 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:38.568 rmmod nvme_tcp 00:11:38.568 rmmod nvme_fabrics 00:11:38.568 rmmod nvme_keyring 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3505205 ']' 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3505205 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3505205 ']' 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3505205 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:38.568 23:53:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3505205 00:11:38.568 23:53:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:38.568 23:53:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:38.568 23:53:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3505205' 00:11:38.568 killing process with pid 3505205 00:11:38.568 23:53:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3505205 00:11:38.568 [2024-05-14 23:53:39.021700] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:38.568 23:53:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3505205 00:11:38.828 23:53:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:38.828 23:53:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:38.828 23:53:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:38.828 23:53:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:38.828 23:53:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:38.828 23:53:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.828 23:53:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.828 23:53:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.803 23:53:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:40.803 00:11:40.803 real 0m13.835s 00:11:40.803 user 0m20.765s 00:11:40.803 sys 0m5.878s 00:11:40.803 23:53:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:40.803 23:53:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:40.803 ************************************ 00:11:40.803 END TEST nvmf_nvme_cli 00:11:40.803 ************************************ 00:11:40.803 23:53:41 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:40.803 23:53:41 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:40.803 23:53:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:40.803 23:53:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:40.803 23:53:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.063 ************************************ 00:11:41.063 
START TEST nvmf_vfio_user 00:11:41.063 ************************************ 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:41.063 * Looking for test storage... 00:11:41.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3506458 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3506458' 00:11:41.063 Process pid: 3506458 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3506458 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3506458 ']' 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:41.063 23:53:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:41.063 [2024-05-14 23:53:41.627864] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:11:41.063 [2024-05-14 23:53:41.627917] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.323 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.323 [2024-05-14 23:53:41.697296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.323 [2024-05-14 23:53:41.771334] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.323 [2024-05-14 23:53:41.771370] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.323 [2024-05-14 23:53:41.771379] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.323 [2024-05-14 23:53:41.771388] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.323 [2024-05-14 23:53:41.771395] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
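Unlike the TCP test, this run starts nvmf_tgt directly (no network namespace), pinned to cores 0-3. A sketch of the launch-and-wait step, with the polling loop standing in for waitforlisten and spdk_get_version assumed as a cheap RPC to probe with:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # Wait until the app answers on its default RPC socket /var/tmp/spdk.sock
    until $SPDK/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done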
00:11:41.323 [2024-05-14 23:53:41.771434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.323 [2024-05-14 23:53:41.771529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.323 [2024-05-14 23:53:41.771613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.323 [2024-05-14 23:53:41.771615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.892 23:53:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:41.892 23:53:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:11:41.892 23:53:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:43.272 23:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:43.272 23:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:43.272 23:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:43.272 23:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:43.272 23:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:43.272 23:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:43.272 Malloc1 00:11:43.272 23:53:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:43.531 23:53:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:43.790 23:53:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:43.790 [2024-05-14 23:53:44.344656] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:43.790 23:53:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:43.790 23:53:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:43.790 23:53:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:44.050 Malloc2 00:11:44.050 23:53:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:44.310 23:53:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:44.569 23:53:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
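(For reference, the setup_nvmf_vfio_user flow traced in the log lines above reduces to the short shell sketch below. It is assembled only from commands already visible in this log; $rpc_py is the scripts/rpc.py path the test exports earlier, and the bdev names, NQNs and vfio-user paths are the ones the test itself uses. This is a recap of what the harness does, not an additional step in the run.)

# minimal sketch of the per-device VFIOUSER target bring-up seen above
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# start the target on cores 0-3 with all tracepoint groups enabled
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
# create the vfio-user transport once, then one malloc-backed subsystem per device
$rpc_py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$rpc_py bdev_malloc_create 64 512 -b Malloc1
$rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# the same mkdir + four rpc.py calls are then repeated for Malloc2 / cnode2 / vfio-user2
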
00:11:44.569 23:53:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:44.569 23:53:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:44.569 23:53:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:44.569 23:53:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:44.569 23:53:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:44.569 23:53:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:44.569 [2024-05-14 23:53:45.138895] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:11:44.569 [2024-05-14 23:53:45.138938] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507218 ] 00:11:44.569 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.831 [2024-05-14 23:53:45.168530] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:44.831 [2024-05-14 23:53:45.178569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:44.831 [2024-05-14 23:53:45.178590] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa366d50000 00:11:44.831 [2024-05-14 23:53:45.179576] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:44.831 [2024-05-14 23:53:45.180574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:44.831 [2024-05-14 23:53:45.181580] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:44.831 [2024-05-14 23:53:45.182586] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:44.831 [2024-05-14 23:53:45.183589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:44.832 [2024-05-14 23:53:45.184594] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:44.832 [2024-05-14 23:53:45.185600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:44.832 [2024-05-14 23:53:45.186610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:44.832 [2024-05-14 23:53:45.187616] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:44.832 [2024-05-14 23:53:45.187630] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa366d45000 00:11:44.832 [2024-05-14 23:53:45.188524] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:44.832 [2024-05-14 23:53:45.201276] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:44.832 [2024-05-14 23:53:45.201304] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:44.832 [2024-05-14 23:53:45.203705] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:44.832 [2024-05-14 23:53:45.203743] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:44.832 [2024-05-14 23:53:45.203821] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:44.832 [2024-05-14 23:53:45.203836] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:44.832 [2024-05-14 23:53:45.203842] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:44.832 [2024-05-14 23:53:45.208198] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:44.832 [2024-05-14 23:53:45.208209] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:44.832 [2024-05-14 23:53:45.208217] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:44.832 [2024-05-14 23:53:45.208730] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:44.832 [2024-05-14 23:53:45.208739] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:44.832 [2024-05-14 23:53:45.208748] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:44.832 [2024-05-14 23:53:45.209733] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:44.832 [2024-05-14 23:53:45.209743] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:44.832 [2024-05-14 23:53:45.210739] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:44.832 [2024-05-14 23:53:45.210748] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:44.832 [2024-05-14 23:53:45.210754] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:44.832 [2024-05-14 23:53:45.210762] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:44.832 
[2024-05-14 23:53:45.210869] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:44.832 [2024-05-14 23:53:45.210875] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:44.832 [2024-05-14 23:53:45.210882] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:44.832 [2024-05-14 23:53:45.211747] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:44.832 [2024-05-14 23:53:45.212752] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:44.832 [2024-05-14 23:53:45.213757] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:44.832 [2024-05-14 23:53:45.214757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:44.832 [2024-05-14 23:53:45.214826] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:44.832 [2024-05-14 23:53:45.215777] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:44.832 [2024-05-14 23:53:45.215786] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:44.832 [2024-05-14 23:53:45.215792] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:44.832 [2024-05-14 23:53:45.215811] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:44.832 [2024-05-14 23:53:45.215820] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:44.832 [2024-05-14 23:53:45.215835] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:44.832 [2024-05-14 23:53:45.215842] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:44.832 [2024-05-14 23:53:45.215855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:44.832 [2024-05-14 23:53:45.215897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:44.832 [2024-05-14 23:53:45.215908] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:44.832 [2024-05-14 23:53:45.215914] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:44.832 [2024-05-14 23:53:45.215919] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:44.832 [2024-05-14 23:53:45.215926] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:44.832 [2024-05-14 23:53:45.215932] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:44.832 [2024-05-14 23:53:45.215938] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:44.832 [2024-05-14 23:53:45.215944] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:44.832 [2024-05-14 23:53:45.215956] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:44.832 [2024-05-14 23:53:45.215969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:44.832 [2024-05-14 23:53:45.215984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:44.832 [2024-05-14 23:53:45.215995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.832 [2024-05-14 23:53:45.216004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.832 [2024-05-14 23:53:45.216012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.832 [2024-05-14 23:53:45.216021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.832 [2024-05-14 23:53:45.216029] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:44.832 [2024-05-14 23:53:45.216039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:44.832 [2024-05-14 23:53:45.216049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:44.832 [2024-05-14 23:53:45.216059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:44.832 [2024-05-14 23:53:45.216066] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:44.832 [2024-05-14 23:53:45.216072] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:44.832 [2024-05-14 23:53:45.216080] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:44.832 [2024-05-14 23:53:45.216089] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:44.832 [2024-05-14 23:53:45.216098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:44.832 [2024-05-14 
23:53:45.216108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:44.832 [2024-05-14 23:53:45.216150] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:44.832 [2024-05-14 23:53:45.216159] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:44.833 [2024-05-14 23:53:45.216167] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:44.833 [2024-05-14 23:53:45.216173] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:44.833 [2024-05-14 23:53:45.216179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:44.833 [2024-05-14 23:53:45.216197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:44.833 [2024-05-14 23:53:45.216210] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:44.833 [2024-05-14 23:53:45.216224] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:44.833 [2024-05-14 23:53:45.216233] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:44.833 [2024-05-14 23:53:45.216240] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:44.833 [2024-05-14 23:53:45.216246] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:44.833 [2024-05-14 23:53:45.216253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:44.833 [2024-05-14 23:53:45.216269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:44.833 [2024-05-14 23:53:45.216282] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:44.833 [2024-05-14 23:53:45.216290] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:44.833 [2024-05-14 23:53:45.216299] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:44.833 [2024-05-14 23:53:45.216305] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:44.833 [2024-05-14 23:53:45.216312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:44.833 [2024-05-14 23:53:45.216324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:44.833 [2024-05-14 23:53:45.216336] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:44.833 
[2024-05-14 23:53:45.216344] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:44.833 [2024-05-14 23:53:45.216352] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:44.833 [2024-05-14 23:53:45.216359] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:44.833 [2024-05-14 23:53:45.216365] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:44.833 [2024-05-14 23:53:45.216372] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:44.833 [2024-05-14 23:53:45.216377] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:44.833 [2024-05-14 23:53:45.216384] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:44.833 [2024-05-14 23:53:45.216404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:44.833 [2024-05-14 23:53:45.216415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:44.833 [2024-05-14 23:53:45.216428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:44.833 [2024-05-14 23:53:45.216439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:44.833 [2024-05-14 23:53:45.216451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:44.833 [2024-05-14 23:53:45.216462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:44.833 [2024-05-14 23:53:45.216474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:44.833 [2024-05-14 23:53:45.216482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:44.833 [2024-05-14 23:53:45.216494] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:44.833 [2024-05-14 23:53:45.216500] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:44.833 [2024-05-14 23:53:45.216504] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:44.833 [2024-05-14 23:53:45.216509] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:44.833 [2024-05-14 23:53:45.216515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:44.833 [2024-05-14 23:53:45.216523] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:44.833 [2024-05-14 23:53:45.216530] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:44.833 [2024-05-14 23:53:45.216537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:44.833 [2024-05-14 23:53:45.216545] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:44.833 [2024-05-14 23:53:45.216551] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:44.833 [2024-05-14 23:53:45.216557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:44.833 [2024-05-14 23:53:45.216568] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:44.833 [2024-05-14 23:53:45.216574] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:44.833 [2024-05-14 23:53:45.216581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:44.833 [2024-05-14 23:53:45.216588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:44.833 [2024-05-14 23:53:45.216604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:44.833 [2024-05-14 23:53:45.216615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:44.833 [2024-05-14 23:53:45.216626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:44.833 ===================================================== 00:11:44.833 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:44.833 ===================================================== 00:11:44.833 Controller Capabilities/Features 00:11:44.833 ================================ 00:11:44.833 Vendor ID: 4e58 00:11:44.833 Subsystem Vendor ID: 4e58 00:11:44.833 Serial Number: SPDK1 00:11:44.833 Model Number: SPDK bdev Controller 00:11:44.833 Firmware Version: 24.05 00:11:44.833 Recommended Arb Burst: 6 00:11:44.833 IEEE OUI Identifier: 8d 6b 50 00:11:44.833 Multi-path I/O 00:11:44.833 May have multiple subsystem ports: Yes 00:11:44.833 May have multiple controllers: Yes 00:11:44.833 Associated with SR-IOV VF: No 00:11:44.833 Max Data Transfer Size: 131072 00:11:44.833 Max Number of Namespaces: 32 00:11:44.833 Max Number of I/O Queues: 127 00:11:44.833 NVMe Specification Version (VS): 1.3 00:11:44.833 NVMe Specification Version (Identify): 1.3 00:11:44.833 Maximum Queue Entries: 256 00:11:44.833 Contiguous Queues Required: Yes 00:11:44.833 Arbitration Mechanisms Supported 00:11:44.833 Weighted Round Robin: Not Supported 00:11:44.833 Vendor Specific: Not Supported 00:11:44.833 Reset Timeout: 15000 ms 00:11:44.833 Doorbell Stride: 4 bytes 00:11:44.833 NVM Subsystem Reset: Not Supported 00:11:44.833 Command Sets Supported 00:11:44.833 NVM Command Set: Supported 00:11:44.833 Boot Partition: Not Supported 00:11:44.833 Memory Page Size Minimum: 4096 bytes 00:11:44.833 Memory Page Size Maximum: 4096 bytes 00:11:44.833 Persistent Memory Region: Not Supported 00:11:44.833 Optional Asynchronous 
Events Supported 00:11:44.833 Namespace Attribute Notices: Supported 00:11:44.833 Firmware Activation Notices: Not Supported 00:11:44.833 ANA Change Notices: Not Supported 00:11:44.833 PLE Aggregate Log Change Notices: Not Supported 00:11:44.833 LBA Status Info Alert Notices: Not Supported 00:11:44.833 EGE Aggregate Log Change Notices: Not Supported 00:11:44.833 Normal NVM Subsystem Shutdown event: Not Supported 00:11:44.833 Zone Descriptor Change Notices: Not Supported 00:11:44.833 Discovery Log Change Notices: Not Supported 00:11:44.833 Controller Attributes 00:11:44.833 128-bit Host Identifier: Supported 00:11:44.833 Non-Operational Permissive Mode: Not Supported 00:11:44.833 NVM Sets: Not Supported 00:11:44.833 Read Recovery Levels: Not Supported 00:11:44.833 Endurance Groups: Not Supported 00:11:44.833 Predictable Latency Mode: Not Supported 00:11:44.833 Traffic Based Keep ALive: Not Supported 00:11:44.833 Namespace Granularity: Not Supported 00:11:44.833 SQ Associations: Not Supported 00:11:44.833 UUID List: Not Supported 00:11:44.833 Multi-Domain Subsystem: Not Supported 00:11:44.833 Fixed Capacity Management: Not Supported 00:11:44.833 Variable Capacity Management: Not Supported 00:11:44.833 Delete Endurance Group: Not Supported 00:11:44.833 Delete NVM Set: Not Supported 00:11:44.833 Extended LBA Formats Supported: Not Supported 00:11:44.833 Flexible Data Placement Supported: Not Supported 00:11:44.833 00:11:44.833 Controller Memory Buffer Support 00:11:44.833 ================================ 00:11:44.833 Supported: No 00:11:44.833 00:11:44.833 Persistent Memory Region Support 00:11:44.833 ================================ 00:11:44.833 Supported: No 00:11:44.833 00:11:44.833 Admin Command Set Attributes 00:11:44.833 ============================ 00:11:44.834 Security Send/Receive: Not Supported 00:11:44.834 Format NVM: Not Supported 00:11:44.834 Firmware Activate/Download: Not Supported 00:11:44.834 Namespace Management: Not Supported 00:11:44.834 Device Self-Test: Not Supported 00:11:44.834 Directives: Not Supported 00:11:44.834 NVMe-MI: Not Supported 00:11:44.834 Virtualization Management: Not Supported 00:11:44.834 Doorbell Buffer Config: Not Supported 00:11:44.834 Get LBA Status Capability: Not Supported 00:11:44.834 Command & Feature Lockdown Capability: Not Supported 00:11:44.834 Abort Command Limit: 4 00:11:44.834 Async Event Request Limit: 4 00:11:44.834 Number of Firmware Slots: N/A 00:11:44.834 Firmware Slot 1 Read-Only: N/A 00:11:44.834 Firmware Activation Without Reset: N/A 00:11:44.834 Multiple Update Detection Support: N/A 00:11:44.834 Firmware Update Granularity: No Information Provided 00:11:44.834 Per-Namespace SMART Log: No 00:11:44.834 Asymmetric Namespace Access Log Page: Not Supported 00:11:44.834 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:44.834 Command Effects Log Page: Supported 00:11:44.834 Get Log Page Extended Data: Supported 00:11:44.834 Telemetry Log Pages: Not Supported 00:11:44.834 Persistent Event Log Pages: Not Supported 00:11:44.834 Supported Log Pages Log Page: May Support 00:11:44.834 Commands Supported & Effects Log Page: Not Supported 00:11:44.834 Feature Identifiers & Effects Log Page:May Support 00:11:44.834 NVMe-MI Commands & Effects Log Page: May Support 00:11:44.834 Data Area 4 for Telemetry Log: Not Supported 00:11:44.834 Error Log Page Entries Supported: 128 00:11:44.834 Keep Alive: Supported 00:11:44.834 Keep Alive Granularity: 10000 ms 00:11:44.834 00:11:44.834 NVM Command Set Attributes 00:11:44.834 ========================== 
00:11:44.834 Submission Queue Entry Size 00:11:44.834 Max: 64 00:11:44.834 Min: 64 00:11:44.834 Completion Queue Entry Size 00:11:44.834 Max: 16 00:11:44.834 Min: 16 00:11:44.834 Number of Namespaces: 32 00:11:44.834 Compare Command: Supported 00:11:44.834 Write Uncorrectable Command: Not Supported 00:11:44.834 Dataset Management Command: Supported 00:11:44.834 Write Zeroes Command: Supported 00:11:44.834 Set Features Save Field: Not Supported 00:11:44.834 Reservations: Not Supported 00:11:44.834 Timestamp: Not Supported 00:11:44.834 Copy: Supported 00:11:44.834 Volatile Write Cache: Present 00:11:44.834 Atomic Write Unit (Normal): 1 00:11:44.834 Atomic Write Unit (PFail): 1 00:11:44.834 Atomic Compare & Write Unit: 1 00:11:44.834 Fused Compare & Write: Supported 00:11:44.834 Scatter-Gather List 00:11:44.834 SGL Command Set: Supported (Dword aligned) 00:11:44.834 SGL Keyed: Not Supported 00:11:44.834 SGL Bit Bucket Descriptor: Not Supported 00:11:44.834 SGL Metadata Pointer: Not Supported 00:11:44.834 Oversized SGL: Not Supported 00:11:44.834 SGL Metadata Address: Not Supported 00:11:44.834 SGL Offset: Not Supported 00:11:44.834 Transport SGL Data Block: Not Supported 00:11:44.834 Replay Protected Memory Block: Not Supported 00:11:44.834 00:11:44.834 Firmware Slot Information 00:11:44.834 ========================= 00:11:44.834 Active slot: 1 00:11:44.834 Slot 1 Firmware Revision: 24.05 00:11:44.834 00:11:44.834 00:11:44.834 Commands Supported and Effects 00:11:44.834 ============================== 00:11:44.834 Admin Commands 00:11:44.834 -------------- 00:11:44.834 Get Log Page (02h): Supported 00:11:44.834 Identify (06h): Supported 00:11:44.834 Abort (08h): Supported 00:11:44.834 Set Features (09h): Supported 00:11:44.834 Get Features (0Ah): Supported 00:11:44.834 Asynchronous Event Request (0Ch): Supported 00:11:44.834 Keep Alive (18h): Supported 00:11:44.834 I/O Commands 00:11:44.834 ------------ 00:11:44.834 Flush (00h): Supported LBA-Change 00:11:44.834 Write (01h): Supported LBA-Change 00:11:44.834 Read (02h): Supported 00:11:44.834 Compare (05h): Supported 00:11:44.834 Write Zeroes (08h): Supported LBA-Change 00:11:44.834 Dataset Management (09h): Supported LBA-Change 00:11:44.834 Copy (19h): Supported LBA-Change 00:11:44.834 Unknown (79h): Supported LBA-Change 00:11:44.834 Unknown (7Ah): Supported 00:11:44.834 00:11:44.834 Error Log 00:11:44.834 ========= 00:11:44.834 00:11:44.834 Arbitration 00:11:44.834 =========== 00:11:44.834 Arbitration Burst: 1 00:11:44.834 00:11:44.834 Power Management 00:11:44.834 ================ 00:11:44.834 Number of Power States: 1 00:11:44.834 Current Power State: Power State #0 00:11:44.834 Power State #0: 00:11:44.834 Max Power: 0.00 W 00:11:44.834 Non-Operational State: Operational 00:11:44.834 Entry Latency: Not Reported 00:11:44.834 Exit Latency: Not Reported 00:11:44.834 Relative Read Throughput: 0 00:11:44.834 Relative Read Latency: 0 00:11:44.834 Relative Write Throughput: 0 00:11:44.834 Relative Write Latency: 0 00:11:44.834 Idle Power: Not Reported 00:11:44.834 Active Power: Not Reported 00:11:44.834 Non-Operational Permissive Mode: Not Supported 00:11:44.834 00:11:44.834 Health Information 00:11:44.834 ================== 00:11:44.834 Critical Warnings: 00:11:44.834 Available Spare Space: OK 00:11:44.834 Temperature: OK 00:11:44.834 Device Reliability: OK 00:11:44.834 Read Only: No 00:11:44.834 Volatile Memory Backup: OK 00:11:44.834 Current Temperature: 0 Kelvin (-2[2024-05-14 23:53:45.216717] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:44.834 [2024-05-14 23:53:45.216731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:44.834 [2024-05-14 23:53:45.216757] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:44.834 [2024-05-14 23:53:45.216768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.834 [2024-05-14 23:53:45.216775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.834 [2024-05-14 23:53:45.216783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.834 [2024-05-14 23:53:45.216791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.834 [2024-05-14 23:53:45.217785] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:44.834 [2024-05-14 23:53:45.217797] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:44.834 [2024-05-14 23:53:45.218782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:44.834 [2024-05-14 23:53:45.218829] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:44.834 [2024-05-14 23:53:45.218837] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:44.834 [2024-05-14 23:53:45.219793] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:44.834 [2024-05-14 23:53:45.219805] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:44.834 [2024-05-14 23:53:45.219855] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:44.834 [2024-05-14 23:53:45.220821] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:44.834 73 Celsius) 00:11:44.834 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:44.834 Available Spare: 0% 00:11:44.834 Available Spare Threshold: 0% 00:11:44.834 Life Percentage Used: 0% 00:11:44.834 Data Units Read: 0 00:11:44.834 Data Units Written: 0 00:11:44.834 Host Read Commands: 0 00:11:44.834 Host Write Commands: 0 00:11:44.834 Controller Busy Time: 0 minutes 00:11:44.834 Power Cycles: 0 00:11:44.834 Power On Hours: 0 hours 00:11:44.834 Unsafe Shutdowns: 0 00:11:44.834 Unrecoverable Media Errors: 0 00:11:44.834 Lifetime Error Log Entries: 0 00:11:44.834 Warning Temperature Time: 0 minutes 00:11:44.834 Critical Temperature Time: 0 minutes 00:11:44.834 00:11:44.834 Number of Queues 00:11:44.834 ================ 00:11:44.834 Number of I/O Submission Queues: 127 00:11:44.834 Number of I/O Completion Queues: 127 00:11:44.834 00:11:44.834 Active Namespaces 00:11:44.834 ================= 00:11:44.834 Namespace 
ID:1 00:11:44.834 Error Recovery Timeout: Unlimited 00:11:44.834 Command Set Identifier: NVM (00h) 00:11:44.834 Deallocate: Supported 00:11:44.835 Deallocated/Unwritten Error: Not Supported 00:11:44.835 Deallocated Read Value: Unknown 00:11:44.835 Deallocate in Write Zeroes: Not Supported 00:11:44.835 Deallocated Guard Field: 0xFFFF 00:11:44.835 Flush: Supported 00:11:44.835 Reservation: Supported 00:11:44.835 Namespace Sharing Capabilities: Multiple Controllers 00:11:44.835 Size (in LBAs): 131072 (0GiB) 00:11:44.835 Capacity (in LBAs): 131072 (0GiB) 00:11:44.835 Utilization (in LBAs): 131072 (0GiB) 00:11:44.835 NGUID: 90665097C32C43378C85333A355BB910 00:11:44.835 UUID: 90665097-c32c-4337-8c85-333a355bb910 00:11:44.835 Thin Provisioning: Not Supported 00:11:44.835 Per-NS Atomic Units: Yes 00:11:44.835 Atomic Boundary Size (Normal): 0 00:11:44.835 Atomic Boundary Size (PFail): 0 00:11:44.835 Atomic Boundary Offset: 0 00:11:44.835 Maximum Single Source Range Length: 65535 00:11:44.835 Maximum Copy Length: 65535 00:11:44.835 Maximum Source Range Count: 1 00:11:44.835 NGUID/EUI64 Never Reused: No 00:11:44.835 Namespace Write Protected: No 00:11:44.835 Number of LBA Formats: 1 00:11:44.835 Current LBA Format: LBA Format #00 00:11:44.835 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:44.835 00:11:44.835 23:53:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:44.835 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.094 [2024-05-14 23:53:45.423902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:50.371 Initializing NVMe Controllers 00:11:50.371 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:50.371 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:50.371 Initialization complete. Launching workers. 00:11:50.371 ======================================================== 00:11:50.371 Latency(us) 00:11:50.371 Device Information : IOPS MiB/s Average min max 00:11:50.371 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39944.69 156.03 3204.04 916.34 6707.44 00:11:50.371 ======================================================== 00:11:50.371 Total : 39944.69 156.03 3204.04 916.34 6707.44 00:11:50.371 00:11:50.371 [2024-05-14 23:53:50.441568] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:50.371 23:53:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:50.371 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.371 [2024-05-14 23:53:50.655574] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:55.646 Initializing NVMe Controllers 00:11:55.646 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:55.646 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:55.646 Initialization complete. Launching workers. 
00:11:55.646 ======================================================== 00:11:55.646 Latency(us) 00:11:55.646 Device Information : IOPS MiB/s Average min max 00:11:55.646 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.02 62.71 7978.45 6982.65 8046.28 00:11:55.646 ======================================================== 00:11:55.646 Total : 16054.02 62.71 7978.45 6982.65 8046.28 00:11:55.646 00:11:55.646 [2024-05-14 23:53:55.696786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:55.646 23:53:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:55.646 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.646 [2024-05-14 23:53:55.917789] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:00.924 [2024-05-14 23:54:00.995535] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:00.924 Initializing NVMe Controllers 00:12:00.924 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:00.924 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:00.924 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:00.924 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:00.924 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:00.925 Initialization complete. Launching workers. 00:12:00.925 Starting thread on core 2 00:12:00.925 Starting thread on core 3 00:12:00.925 Starting thread on core 1 00:12:00.925 23:54:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:00.925 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.925 [2024-05-14 23:54:01.298586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:04.224 [2024-05-14 23:54:04.702421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:04.224 Initializing NVMe Controllers 00:12:04.224 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.224 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:04.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:04.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:04.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:04.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:04.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:04.224 Initialization complete. Launching workers. 
00:12:04.224 Starting thread on core 1 with urgent priority queue 00:12:04.224 Starting thread on core 2 with urgent priority queue 00:12:04.224 Starting thread on core 3 with urgent priority queue 00:12:04.224 Starting thread on core 0 with urgent priority queue 00:12:04.224 SPDK bdev Controller (SPDK1 ) core 0: 6798.33 IO/s 14.71 secs/100000 ios 00:12:04.224 SPDK bdev Controller (SPDK1 ) core 1: 5417.00 IO/s 18.46 secs/100000 ios 00:12:04.224 SPDK bdev Controller (SPDK1 ) core 2: 5413.33 IO/s 18.47 secs/100000 ios 00:12:04.224 SPDK bdev Controller (SPDK1 ) core 3: 6183.67 IO/s 16.17 secs/100000 ios 00:12:04.224 ======================================================== 00:12:04.224 00:12:04.224 23:54:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:04.224 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.483 [2024-05-14 23:54:04.993659] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:04.483 Initializing NVMe Controllers 00:12:04.483 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.483 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.483 Namespace ID: 1 size: 0GB 00:12:04.483 Initialization complete. 00:12:04.483 INFO: using host memory buffer for IO 00:12:04.483 Hello world! 00:12:04.483 [2024-05-14 23:54:05.029990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:04.483 23:54:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:04.742 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.742 [2024-05-14 23:54:05.313604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:06.122 Initializing NVMe Controllers 00:12:06.122 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:06.122 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:06.122 Initialization complete. Launching workers. 
00:12:06.122 submit (in ns) avg, min, max = 6653.7, 3104.0, 3999699.2 00:12:06.122 complete (in ns) avg, min, max = 18734.9, 1720.8, 5991899.2 00:12:06.122 00:12:06.122 Submit histogram 00:12:06.122 ================ 00:12:06.122 Range in us Cumulative Count 00:12:06.122 3.098 - 3.110: 0.0296% ( 5) 00:12:06.122 3.110 - 3.123: 0.1363% ( 18) 00:12:06.122 3.123 - 3.136: 0.2134% ( 13) 00:12:06.122 3.136 - 3.149: 0.4208% ( 35) 00:12:06.122 3.149 - 3.162: 0.9483% ( 89) 00:12:06.122 3.162 - 3.174: 2.1693% ( 206) 00:12:06.122 3.174 - 3.187: 3.9296% ( 297) 00:12:06.122 3.187 - 3.200: 7.1598% ( 545) 00:12:06.122 3.200 - 3.213: 11.5517% ( 741) 00:12:06.122 3.213 - 3.226: 16.7556% ( 878) 00:12:06.122 3.226 - 3.238: 23.0797% ( 1067) 00:12:06.122 3.238 - 3.251: 28.7162% ( 951) 00:12:06.122 3.251 - 3.264: 35.2537% ( 1103) 00:12:06.122 3.264 - 3.277: 41.6430% ( 1078) 00:12:06.122 3.277 - 3.302: 55.4410% ( 2328) 00:12:06.122 3.302 - 3.328: 62.9801% ( 1272) 00:12:06.122 3.328 - 3.354: 68.5633% ( 942) 00:12:06.122 3.354 - 3.379: 72.6885% ( 696) 00:12:06.122 3.379 - 3.405: 76.4047% ( 627) 00:12:06.122 3.405 - 3.430: 84.1987% ( 1315) 00:12:06.122 3.430 - 3.456: 87.4289% ( 545) 00:12:06.122 3.456 - 3.482: 88.4661% ( 175) 00:12:06.122 3.482 - 3.507: 89.0825% ( 104) 00:12:06.122 3.507 - 3.533: 90.0605% ( 165) 00:12:06.122 3.533 - 3.558: 91.5659% ( 254) 00:12:06.122 3.558 - 3.584: 93.4270% ( 314) 00:12:06.122 3.584 - 3.610: 95.0391% ( 272) 00:12:06.122 3.610 - 3.635: 96.1415% ( 186) 00:12:06.122 3.635 - 3.661: 97.1195% ( 165) 00:12:06.122 3.661 - 3.686: 98.3108% ( 201) 00:12:06.122 3.686 - 3.712: 98.8739% ( 95) 00:12:06.122 3.712 - 3.738: 99.2354% ( 61) 00:12:06.122 3.738 - 3.763: 99.4192% ( 31) 00:12:06.122 3.763 - 3.789: 99.5614% ( 24) 00:12:06.122 3.789 - 3.814: 99.6147% ( 9) 00:12:06.122 3.814 - 3.840: 99.6325% ( 3) 00:12:06.122 5.965 - 5.990: 99.6385% ( 1) 00:12:06.122 6.016 - 6.042: 99.6444% ( 1) 00:12:06.122 6.042 - 6.067: 99.6503% ( 1) 00:12:06.122 6.093 - 6.118: 99.6562% ( 1) 00:12:06.122 6.118 - 6.144: 99.6622% ( 1) 00:12:06.122 6.144 - 6.170: 99.6740% ( 2) 00:12:06.122 6.170 - 6.195: 99.6859% ( 2) 00:12:06.122 6.195 - 6.221: 99.6977% ( 2) 00:12:06.122 6.272 - 6.298: 99.7037% ( 1) 00:12:06.122 6.374 - 6.400: 99.7096% ( 1) 00:12:06.122 6.400 - 6.426: 99.7155% ( 1) 00:12:06.122 6.426 - 6.451: 99.7214% ( 1) 00:12:06.122 6.451 - 6.477: 99.7274% ( 1) 00:12:06.122 6.528 - 6.554: 99.7333% ( 1) 00:12:06.122 6.554 - 6.605: 99.7451% ( 2) 00:12:06.122 6.605 - 6.656: 99.7570% ( 2) 00:12:06.122 6.656 - 6.707: 99.7629% ( 1) 00:12:06.122 6.707 - 6.758: 99.7866% ( 4) 00:12:06.122 6.758 - 6.810: 99.7926% ( 1) 00:12:06.122 6.861 - 6.912: 99.8044% ( 2) 00:12:06.122 6.912 - 6.963: 99.8103% ( 1) 00:12:06.122 6.963 - 7.014: 99.8163% ( 1) 00:12:06.122 7.066 - 7.117: 99.8222% ( 1) 00:12:06.123 7.168 - 7.219: 99.8281% ( 1) 00:12:06.123 7.322 - 7.373: 99.8459% ( 3) 00:12:06.123 7.373 - 7.424: 99.8518% ( 1) 00:12:06.123 7.424 - 7.475: 99.8696% ( 3) 00:12:06.123 7.526 - 7.578: 99.8815% ( 2) 00:12:06.123 7.731 - 7.782: 99.8874% ( 1) 00:12:06.123 7.782 - 7.834: 99.8933% ( 1) 00:12:06.123 7.987 - 8.038: 99.8992% ( 1) 00:12:06.123 8.243 - 8.294: 99.9052% ( 1) 00:12:06.123 8.397 - 8.448: 99.9111% ( 1) 00:12:06.123 11.418 - 11.469: 99.9170% ( 1) 00:12:06.123 3984.589 - 4010.803: 100.0000% ( 14) 00:12:06.123 00:12:06.123 Complete histogram 00:12:06.123 ================== 00:12:06.123 Range in us Cumulative Count 00:12:06.123 1.715 - 1.728: 0.1067% ( 18) 00:12:06.123 1.728 - 1.741: 1.0965% ( 167) 00:12:06.123 1.741 - 1.754: 2.3471% ( 211) 
00:12:06.123 1.754 - 1.766: 2.9279% ( 98) 00:12:06.123 1.766 - 1.779: 3.9592% ( 174) 00:12:06.123 1.779 - 1.792: 28.7992% ( 4191) 00:12:06.123 1.792 - [2024-05-14 23:54:06.332626] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:06.123 1.805: 77.9161% ( 8287) 00:12:06.123 1.805 - 1.818: 88.8928% ( 1852) 00:12:06.123 1.818 - 1.830: 94.1856% ( 893) 00:12:06.123 1.830 - 1.843: 96.2127% ( 342) 00:12:06.123 1.843 - 1.856: 96.8172% ( 102) 00:12:06.123 1.856 - 1.869: 98.0856% ( 214) 00:12:06.123 1.869 - 1.882: 99.0458% ( 162) 00:12:06.123 1.882 - 1.894: 99.2828% ( 40) 00:12:06.123 1.894 - 1.907: 99.3243% ( 7) 00:12:06.123 1.907 - 1.920: 99.3540% ( 5) 00:12:06.123 1.920 - 1.933: 99.3658% ( 2) 00:12:06.123 1.933 - 1.946: 99.3717% ( 1) 00:12:06.123 1.946 - 1.958: 99.3777% ( 1) 00:12:06.123 1.971 - 1.984: 99.3836% ( 1) 00:12:06.123 1.997 - 2.010: 99.3895% ( 1) 00:12:06.123 4.070 - 4.096: 99.3954% ( 1) 00:12:06.123 4.403 - 4.429: 99.4073% ( 2) 00:12:06.123 4.454 - 4.480: 99.4132% ( 1) 00:12:06.123 4.480 - 4.506: 99.4192% ( 1) 00:12:06.123 4.634 - 4.659: 99.4251% ( 1) 00:12:06.123 4.659 - 4.685: 99.4310% ( 1) 00:12:06.123 4.736 - 4.762: 99.4369% ( 1) 00:12:06.123 4.838 - 4.864: 99.4429% ( 1) 00:12:06.123 4.890 - 4.915: 99.4547% ( 2) 00:12:06.123 4.992 - 5.018: 99.4606% ( 1) 00:12:06.123 5.018 - 5.043: 99.4725% ( 2) 00:12:06.123 5.171 - 5.197: 99.4784% ( 1) 00:12:06.123 5.197 - 5.222: 99.4844% ( 1) 00:12:06.123 5.222 - 5.248: 99.4903% ( 1) 00:12:06.123 5.350 - 5.376: 99.4962% ( 1) 00:12:06.123 5.376 - 5.402: 99.5081% ( 2) 00:12:06.123 5.427 - 5.453: 99.5140% ( 1) 00:12:06.123 5.606 - 5.632: 99.5199% ( 1) 00:12:06.123 5.632 - 5.658: 99.5318% ( 2) 00:12:06.123 5.658 - 5.683: 99.5377% ( 1) 00:12:06.123 5.709 - 5.734: 99.5436% ( 1) 00:12:06.123 5.862 - 5.888: 99.5495% ( 1) 00:12:06.123 5.914 - 5.939: 99.5614% ( 2) 00:12:06.123 6.195 - 6.221: 99.5673% ( 1) 00:12:06.123 6.451 - 6.477: 99.5733% ( 1) 00:12:06.123 10.650 - 10.701: 99.5792% ( 1) 00:12:06.123 3984.589 - 4010.803: 99.9941% ( 70) 00:12:06.123 5976.883 - 6003.098: 100.0000% ( 1) 00:12:06.123 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:06.123 [ 00:12:06.123 { 00:12:06.123 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:06.123 "subtype": "Discovery", 00:12:06.123 "listen_addresses": [], 00:12:06.123 "allow_any_host": true, 00:12:06.123 "hosts": [] 00:12:06.123 }, 00:12:06.123 { 00:12:06.123 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:06.123 "subtype": "NVMe", 00:12:06.123 "listen_addresses": [ 00:12:06.123 { 00:12:06.123 "trtype": "VFIOUSER", 00:12:06.123 "adrfam": "IPv4", 00:12:06.123 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:06.123 "trsvcid": "0" 00:12:06.123 } 00:12:06.123 ], 00:12:06.123 "allow_any_host": true, 00:12:06.123 "hosts": [], 00:12:06.123 "serial_number": "SPDK1", 00:12:06.123 "model_number": "SPDK bdev Controller", 
00:12:06.123 "max_namespaces": 32, 00:12:06.123 "min_cntlid": 1, 00:12:06.123 "max_cntlid": 65519, 00:12:06.123 "namespaces": [ 00:12:06.123 { 00:12:06.123 "nsid": 1, 00:12:06.123 "bdev_name": "Malloc1", 00:12:06.123 "name": "Malloc1", 00:12:06.123 "nguid": "90665097C32C43378C85333A355BB910", 00:12:06.123 "uuid": "90665097-c32c-4337-8c85-333a355bb910" 00:12:06.123 } 00:12:06.123 ] 00:12:06.123 }, 00:12:06.123 { 00:12:06.123 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:06.123 "subtype": "NVMe", 00:12:06.123 "listen_addresses": [ 00:12:06.123 { 00:12:06.123 "trtype": "VFIOUSER", 00:12:06.123 "adrfam": "IPv4", 00:12:06.123 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:06.123 "trsvcid": "0" 00:12:06.123 } 00:12:06.123 ], 00:12:06.123 "allow_any_host": true, 00:12:06.123 "hosts": [], 00:12:06.123 "serial_number": "SPDK2", 00:12:06.123 "model_number": "SPDK bdev Controller", 00:12:06.123 "max_namespaces": 32, 00:12:06.123 "min_cntlid": 1, 00:12:06.123 "max_cntlid": 65519, 00:12:06.123 "namespaces": [ 00:12:06.123 { 00:12:06.123 "nsid": 1, 00:12:06.123 "bdev_name": "Malloc2", 00:12:06.123 "name": "Malloc2", 00:12:06.123 "nguid": "F8A259BE5C9246F2A6FE8AF2B07FD890", 00:12:06.123 "uuid": "f8a259be-5c92-46f2-a6fe-8af2b07fd890" 00:12:06.123 } 00:12:06.123 ] 00:12:06.123 } 00:12:06.123 ] 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3511297 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:06.123 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:06.123 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.123 [2024-05-14 23:54:06.712605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:06.382 Malloc3 00:12:06.382 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:06.382 [2024-05-14 23:54:06.922135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:06.382 23:54:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:06.382 Asynchronous Event Request test 00:12:06.382 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:06.382 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:06.382 Registering asynchronous event callbacks... 00:12:06.382 Starting namespace attribute notice tests for all controllers... 00:12:06.382 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:06.382 aer_cb - Changed Namespace 00:12:06.382 Cleaning up... 00:12:06.641 [ 00:12:06.641 { 00:12:06.641 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:06.641 "subtype": "Discovery", 00:12:06.641 "listen_addresses": [], 00:12:06.641 "allow_any_host": true, 00:12:06.641 "hosts": [] 00:12:06.641 }, 00:12:06.641 { 00:12:06.641 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:06.641 "subtype": "NVMe", 00:12:06.641 "listen_addresses": [ 00:12:06.641 { 00:12:06.641 "trtype": "VFIOUSER", 00:12:06.641 "adrfam": "IPv4", 00:12:06.641 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:06.641 "trsvcid": "0" 00:12:06.641 } 00:12:06.641 ], 00:12:06.641 "allow_any_host": true, 00:12:06.641 "hosts": [], 00:12:06.641 "serial_number": "SPDK1", 00:12:06.641 "model_number": "SPDK bdev Controller", 00:12:06.641 "max_namespaces": 32, 00:12:06.641 "min_cntlid": 1, 00:12:06.641 "max_cntlid": 65519, 00:12:06.641 "namespaces": [ 00:12:06.641 { 00:12:06.641 "nsid": 1, 00:12:06.641 "bdev_name": "Malloc1", 00:12:06.641 "name": "Malloc1", 00:12:06.641 "nguid": "90665097C32C43378C85333A355BB910", 00:12:06.641 "uuid": "90665097-c32c-4337-8c85-333a355bb910" 00:12:06.641 }, 00:12:06.641 { 00:12:06.641 "nsid": 2, 00:12:06.641 "bdev_name": "Malloc3", 00:12:06.641 "name": "Malloc3", 00:12:06.641 "nguid": "2ED90B2063034D9B85B94C1105C5C24E", 00:12:06.641 "uuid": "2ed90b20-6303-4d9b-85b9-4c1105c5c24e" 00:12:06.641 } 00:12:06.641 ] 00:12:06.641 }, 00:12:06.641 { 00:12:06.641 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:06.641 "subtype": "NVMe", 00:12:06.641 "listen_addresses": [ 00:12:06.641 { 00:12:06.641 "trtype": "VFIOUSER", 00:12:06.641 "adrfam": "IPv4", 00:12:06.641 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:06.641 "trsvcid": "0" 00:12:06.641 } 00:12:06.641 ], 00:12:06.641 "allow_any_host": true, 00:12:06.641 "hosts": [], 00:12:06.641 "serial_number": "SPDK2", 00:12:06.641 "model_number": "SPDK bdev Controller", 00:12:06.641 
"max_namespaces": 32, 00:12:06.641 "min_cntlid": 1, 00:12:06.641 "max_cntlid": 65519, 00:12:06.641 "namespaces": [ 00:12:06.641 { 00:12:06.641 "nsid": 1, 00:12:06.641 "bdev_name": "Malloc2", 00:12:06.641 "name": "Malloc2", 00:12:06.641 "nguid": "F8A259BE5C9246F2A6FE8AF2B07FD890", 00:12:06.641 "uuid": "f8a259be-5c92-46f2-a6fe-8af2b07fd890" 00:12:06.641 } 00:12:06.641 ] 00:12:06.641 } 00:12:06.641 ] 00:12:06.641 23:54:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3511297 00:12:06.641 23:54:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:06.641 23:54:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:06.641 23:54:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:06.641 23:54:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:06.641 [2024-05-14 23:54:07.160899] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:12:06.641 [2024-05-14 23:54:07.160937] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511542 ] 00:12:06.641 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.641 [2024-05-14 23:54:07.191398] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:06.641 [2024-05-14 23:54:07.199414] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:06.641 [2024-05-14 23:54:07.199437] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9edfab2000 00:12:06.641 [2024-05-14 23:54:07.200413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:06.641 [2024-05-14 23:54:07.201416] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:06.641 [2024-05-14 23:54:07.202425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:06.641 [2024-05-14 23:54:07.203433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:06.641 [2024-05-14 23:54:07.204439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:06.641 [2024-05-14 23:54:07.205451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:06.641 [2024-05-14 23:54:07.206454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:06.641 [2024-05-14 23:54:07.207471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:06.641 [2024-05-14 23:54:07.208482] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:06.641 [2024-05-14 23:54:07.208497] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9edfaa7000 00:12:06.641 [2024-05-14 23:54:07.209391] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:06.641 [2024-05-14 23:54:07.222600] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:06.641 [2024-05-14 23:54:07.222625] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:06.641 [2024-05-14 23:54:07.224676] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:06.641 [2024-05-14 23:54:07.224718] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:06.641 [2024-05-14 23:54:07.224789] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:06.641 [2024-05-14 23:54:07.224805] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:06.641 [2024-05-14 23:54:07.224811] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:06.641 [2024-05-14 23:54:07.225677] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:06.641 [2024-05-14 23:54:07.225688] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:06.641 [2024-05-14 23:54:07.225697] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:06.641 [2024-05-14 23:54:07.226684] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:06.641 [2024-05-14 23:54:07.226694] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:06.641 [2024-05-14 23:54:07.226703] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:06.641 [2024-05-14 23:54:07.227693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:06.641 [2024-05-14 23:54:07.227704] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:06.641 [2024-05-14 23:54:07.228694] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:06.641 [2024-05-14 23:54:07.228704] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:06.641 [2024-05-14 23:54:07.228710] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:06.641 [2024-05-14 23:54:07.228719] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:06.641 [2024-05-14 23:54:07.228825] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:06.641 [2024-05-14 23:54:07.228831] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:06.641 [2024-05-14 23:54:07.228840] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:06.641 [2024-05-14 23:54:07.229705] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:06.641 [2024-05-14 23:54:07.230710] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:06.641 [2024-05-14 23:54:07.231720] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:06.641 [2024-05-14 23:54:07.232723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:06.641 [2024-05-14 23:54:07.232764] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:06.901 [2024-05-14 23:54:07.233734] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:06.901 [2024-05-14 23:54:07.233745] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:06.901 [2024-05-14 23:54:07.233751] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:06.901 [2024-05-14 23:54:07.233770] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:06.901 [2024-05-14 23:54:07.233779] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:06.901 [2024-05-14 23:54:07.233792] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:06.901 [2024-05-14 23:54:07.233799] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:06.901 [2024-05-14 23:54:07.233812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.240199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.240212] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:06.902 [2024-05-14 23:54:07.240219] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:06.902 [2024-05-14 23:54:07.240224] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:06.902 [2024-05-14 23:54:07.240230] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:06.902 [2024-05-14 23:54:07.240236] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:06.902 [2024-05-14 23:54:07.240242] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:06.902 [2024-05-14 23:54:07.240249] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.240260] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.240273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.248195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.248211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.902 [2024-05-14 23:54:07.248220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.902 [2024-05-14 23:54:07.248229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.902 [2024-05-14 23:54:07.248238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.902 [2024-05-14 23:54:07.248244] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.248255] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.248264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.256197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.256206] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:06.902 [2024-05-14 23:54:07.256212] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.256221] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.256230] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.256239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.264197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.264241] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.264251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.264259] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:06.902 [2024-05-14 23:54:07.264265] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:06.902 [2024-05-14 23:54:07.264272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.272205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.272221] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:06.902 [2024-05-14 23:54:07.272235] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.272243] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.272251] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:06.902 [2024-05-14 23:54:07.272257] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:06.902 [2024-05-14 23:54:07.272266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.280197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.280211] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.280220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.280228] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:06.902 [2024-05-14 23:54:07.280234] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:06.902 [2024-05-14 23:54:07.280241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.288197] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.288212] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.288221] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.288229] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.288236] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.288243] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.288249] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:06.902 [2024-05-14 23:54:07.288255] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:06.902 [2024-05-14 23:54:07.288261] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:06.902 [2024-05-14 23:54:07.288282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.296197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.296221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.304196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.304212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.312197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.312211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.320199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.320216] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:06.902 [2024-05-14 23:54:07.320222] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:06.902 [2024-05-14 23:54:07.320229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:06.902 [2024-05-14 23:54:07.320234] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:06.902 [2024-05-14 23:54:07.320241] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:06.902 [2024-05-14 23:54:07.320249] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:06.902 [2024-05-14 23:54:07.320255] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:06.902 [2024-05-14 23:54:07.320261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.320269] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:06.902 [2024-05-14 23:54:07.320275] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:06.902 [2024-05-14 23:54:07.320282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.320292] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:06.902 [2024-05-14 23:54:07.320298] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:06.902 [2024-05-14 23:54:07.320304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:06.902 [2024-05-14 23:54:07.328201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.328218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.328229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:06.902 [2024-05-14 23:54:07.328240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:06.902 ===================================================== 00:12:06.902 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:06.902 ===================================================== 00:12:06.902 Controller Capabilities/Features 00:12:06.902 ================================ 00:12:06.902 Vendor ID: 4e58 00:12:06.902 Subsystem Vendor ID: 4e58 00:12:06.902 Serial Number: SPDK2 00:12:06.902 Model Number: SPDK bdev Controller 00:12:06.902 Firmware Version: 24.05 00:12:06.903 Recommended Arb Burst: 6 00:12:06.903 IEEE OUI Identifier: 8d 6b 50 00:12:06.903 Multi-path I/O 00:12:06.903 May have multiple subsystem ports: Yes 00:12:06.903 May have multiple controllers: Yes 00:12:06.903 Associated with SR-IOV VF: No 00:12:06.903 Max Data Transfer Size: 131072 00:12:06.903 Max Number of Namespaces: 32 00:12:06.903 Max Number of I/O Queues: 127 00:12:06.903 NVMe Specification Version (VS): 1.3 00:12:06.903 NVMe Specification Version (Identify): 1.3 00:12:06.903 Maximum Queue Entries: 256 00:12:06.903 Contiguous Queues Required: Yes 00:12:06.903 Arbitration Mechanisms Supported 00:12:06.903 Weighted Round Robin: Not Supported 00:12:06.903 Vendor Specific: Not Supported 00:12:06.903 Reset Timeout: 15000 ms 00:12:06.903 Doorbell Stride: 4 bytes 
00:12:06.903 NVM Subsystem Reset: Not Supported 00:12:06.903 Command Sets Supported 00:12:06.903 NVM Command Set: Supported 00:12:06.903 Boot Partition: Not Supported 00:12:06.903 Memory Page Size Minimum: 4096 bytes 00:12:06.903 Memory Page Size Maximum: 4096 bytes 00:12:06.903 Persistent Memory Region: Not Supported 00:12:06.903 Optional Asynchronous Events Supported 00:12:06.903 Namespace Attribute Notices: Supported 00:12:06.903 Firmware Activation Notices: Not Supported 00:12:06.903 ANA Change Notices: Not Supported 00:12:06.903 PLE Aggregate Log Change Notices: Not Supported 00:12:06.903 LBA Status Info Alert Notices: Not Supported 00:12:06.903 EGE Aggregate Log Change Notices: Not Supported 00:12:06.903 Normal NVM Subsystem Shutdown event: Not Supported 00:12:06.903 Zone Descriptor Change Notices: Not Supported 00:12:06.903 Discovery Log Change Notices: Not Supported 00:12:06.903 Controller Attributes 00:12:06.903 128-bit Host Identifier: Supported 00:12:06.903 Non-Operational Permissive Mode: Not Supported 00:12:06.903 NVM Sets: Not Supported 00:12:06.903 Read Recovery Levels: Not Supported 00:12:06.903 Endurance Groups: Not Supported 00:12:06.903 Predictable Latency Mode: Not Supported 00:12:06.903 Traffic Based Keep ALive: Not Supported 00:12:06.903 Namespace Granularity: Not Supported 00:12:06.903 SQ Associations: Not Supported 00:12:06.903 UUID List: Not Supported 00:12:06.903 Multi-Domain Subsystem: Not Supported 00:12:06.903 Fixed Capacity Management: Not Supported 00:12:06.903 Variable Capacity Management: Not Supported 00:12:06.903 Delete Endurance Group: Not Supported 00:12:06.903 Delete NVM Set: Not Supported 00:12:06.903 Extended LBA Formats Supported: Not Supported 00:12:06.903 Flexible Data Placement Supported: Not Supported 00:12:06.903 00:12:06.903 Controller Memory Buffer Support 00:12:06.903 ================================ 00:12:06.903 Supported: No 00:12:06.903 00:12:06.903 Persistent Memory Region Support 00:12:06.903 ================================ 00:12:06.903 Supported: No 00:12:06.903 00:12:06.903 Admin Command Set Attributes 00:12:06.903 ============================ 00:12:06.903 Security Send/Receive: Not Supported 00:12:06.903 Format NVM: Not Supported 00:12:06.903 Firmware Activate/Download: Not Supported 00:12:06.903 Namespace Management: Not Supported 00:12:06.903 Device Self-Test: Not Supported 00:12:06.903 Directives: Not Supported 00:12:06.903 NVMe-MI: Not Supported 00:12:06.903 Virtualization Management: Not Supported 00:12:06.903 Doorbell Buffer Config: Not Supported 00:12:06.903 Get LBA Status Capability: Not Supported 00:12:06.903 Command & Feature Lockdown Capability: Not Supported 00:12:06.903 Abort Command Limit: 4 00:12:06.903 Async Event Request Limit: 4 00:12:06.903 Number of Firmware Slots: N/A 00:12:06.903 Firmware Slot 1 Read-Only: N/A 00:12:06.903 Firmware Activation Without Reset: N/A 00:12:06.903 Multiple Update Detection Support: N/A 00:12:06.903 Firmware Update Granularity: No Information Provided 00:12:06.903 Per-Namespace SMART Log: No 00:12:06.903 Asymmetric Namespace Access Log Page: Not Supported 00:12:06.903 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:06.903 Command Effects Log Page: Supported 00:12:06.903 Get Log Page Extended Data: Supported 00:12:06.903 Telemetry Log Pages: Not Supported 00:12:06.903 Persistent Event Log Pages: Not Supported 00:12:06.903 Supported Log Pages Log Page: May Support 00:12:06.903 Commands Supported & Effects Log Page: Not Supported 00:12:06.903 Feature Identifiers & Effects Log Page:May 
Support 00:12:06.903 NVMe-MI Commands & Effects Log Page: May Support 00:12:06.903 Data Area 4 for Telemetry Log: Not Supported 00:12:06.903 Error Log Page Entries Supported: 128 00:12:06.903 Keep Alive: Supported 00:12:06.903 Keep Alive Granularity: 10000 ms 00:12:06.903 00:12:06.903 NVM Command Set Attributes 00:12:06.903 ========================== 00:12:06.903 Submission Queue Entry Size 00:12:06.903 Max: 64 00:12:06.903 Min: 64 00:12:06.903 Completion Queue Entry Size 00:12:06.903 Max: 16 00:12:06.903 Min: 16 00:12:06.903 Number of Namespaces: 32 00:12:06.903 Compare Command: Supported 00:12:06.903 Write Uncorrectable Command: Not Supported 00:12:06.903 Dataset Management Command: Supported 00:12:06.903 Write Zeroes Command: Supported 00:12:06.903 Set Features Save Field: Not Supported 00:12:06.903 Reservations: Not Supported 00:12:06.903 Timestamp: Not Supported 00:12:06.903 Copy: Supported 00:12:06.903 Volatile Write Cache: Present 00:12:06.903 Atomic Write Unit (Normal): 1 00:12:06.903 Atomic Write Unit (PFail): 1 00:12:06.903 Atomic Compare & Write Unit: 1 00:12:06.903 Fused Compare & Write: Supported 00:12:06.903 Scatter-Gather List 00:12:06.903 SGL Command Set: Supported (Dword aligned) 00:12:06.903 SGL Keyed: Not Supported 00:12:06.903 SGL Bit Bucket Descriptor: Not Supported 00:12:06.903 SGL Metadata Pointer: Not Supported 00:12:06.903 Oversized SGL: Not Supported 00:12:06.903 SGL Metadata Address: Not Supported 00:12:06.903 SGL Offset: Not Supported 00:12:06.903 Transport SGL Data Block: Not Supported 00:12:06.903 Replay Protected Memory Block: Not Supported 00:12:06.903 00:12:06.903 Firmware Slot Information 00:12:06.903 ========================= 00:12:06.903 Active slot: 1 00:12:06.903 Slot 1 Firmware Revision: 24.05 00:12:06.903 00:12:06.903 00:12:06.903 Commands Supported and Effects 00:12:06.903 ============================== 00:12:06.903 Admin Commands 00:12:06.903 -------------- 00:12:06.903 Get Log Page (02h): Supported 00:12:06.903 Identify (06h): Supported 00:12:06.903 Abort (08h): Supported 00:12:06.903 Set Features (09h): Supported 00:12:06.903 Get Features (0Ah): Supported 00:12:06.903 Asynchronous Event Request (0Ch): Supported 00:12:06.903 Keep Alive (18h): Supported 00:12:06.903 I/O Commands 00:12:06.903 ------------ 00:12:06.903 Flush (00h): Supported LBA-Change 00:12:06.903 Write (01h): Supported LBA-Change 00:12:06.903 Read (02h): Supported 00:12:06.903 Compare (05h): Supported 00:12:06.903 Write Zeroes (08h): Supported LBA-Change 00:12:06.903 Dataset Management (09h): Supported LBA-Change 00:12:06.903 Copy (19h): Supported LBA-Change 00:12:06.903 Unknown (79h): Supported LBA-Change 00:12:06.903 Unknown (7Ah): Supported 00:12:06.903 00:12:06.903 Error Log 00:12:06.903 ========= 00:12:06.903 00:12:06.903 Arbitration 00:12:06.903 =========== 00:12:06.903 Arbitration Burst: 1 00:12:06.903 00:12:06.903 Power Management 00:12:06.903 ================ 00:12:06.903 Number of Power States: 1 00:12:06.903 Current Power State: Power State #0 00:12:06.903 Power State #0: 00:12:06.903 Max Power: 0.00 W 00:12:06.903 Non-Operational State: Operational 00:12:06.903 Entry Latency: Not Reported 00:12:06.903 Exit Latency: Not Reported 00:12:06.903 Relative Read Throughput: 0 00:12:06.903 Relative Read Latency: 0 00:12:06.903 Relative Write Throughput: 0 00:12:06.903 Relative Write Latency: 0 00:12:06.903 Idle Power: Not Reported 00:12:06.903 Active Power: Not Reported 00:12:06.903 Non-Operational Permissive Mode: Not Supported 00:12:06.903 00:12:06.903 Health Information 
00:12:06.903 ================== 00:12:06.903 Critical Warnings: 00:12:06.903 Available Spare Space: OK 00:12:06.903 Temperature: OK 00:12:06.903 Device Reliability: OK 00:12:06.903 Read Only: No 00:12:06.903 Volatile Memory Backup: OK 00:12:06.903 Current Temperature: 0 Kelvin (-273 Celsius) [2024-05-14 23:54:07.328331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:06.903 [2024-05-14 23:54:07.336199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:06.903 [2024-05-14 23:54:07.336229] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:06.903 [2024-05-14 23:54:07.336239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.904 [2024-05-14 23:54:07.336247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.904 [2024-05-14 23:54:07.336255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.904 [2024-05-14 23:54:07.336263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.904 [2024-05-14 23:54:07.336317] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:06.904 [2024-05-14 23:54:07.336329] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:06.904 [2024-05-14 23:54:07.337317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:06.904 [2024-05-14 23:54:07.337362] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:06.904 [2024-05-14 23:54:07.337372] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:06.904 [2024-05-14 23:54:07.338322] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:06.904 [2024-05-14 23:54:07.338334] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:06.904 [2024-05-14 23:54:07.338381] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:06.904 [2024-05-14 23:54:07.341197] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:06.904 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:06.904 Available Spare: 0% 00:12:06.904 Available Spare Threshold: 0% 00:12:06.904 Life Percentage Used: 0% 00:12:06.904 Data Units Read: 0 00:12:06.904 Data Units Written: 0 00:12:06.904 Host Read Commands: 0 00:12:06.904 Host Write Commands: 0 00:12:06.904 Controller Busy Time: 0 minutes 00:12:06.904 Power Cycles: 0 00:12:06.904 Power On Hours: 0 hours 00:12:06.904 Unsafe Shutdowns: 0 00:12:06.904 Unrecoverable Media Errors: 0 00:12:06.904 Lifetime Error Log Entries: 0 00:12:06.904 Warning Temperature Time: 0
minutes 00:12:06.904 Critical Temperature Time: 0 minutes 00:12:06.904 00:12:06.904 Number of Queues 00:12:06.904 ================ 00:12:06.904 Number of I/O Submission Queues: 127 00:12:06.904 Number of I/O Completion Queues: 127 00:12:06.904 00:12:06.904 Active Namespaces 00:12:06.904 ================= 00:12:06.904 Namespace ID:1 00:12:06.904 Error Recovery Timeout: Unlimited 00:12:06.904 Command Set Identifier: NVM (00h) 00:12:06.904 Deallocate: Supported 00:12:06.904 Deallocated/Unwritten Error: Not Supported 00:12:06.904 Deallocated Read Value: Unknown 00:12:06.904 Deallocate in Write Zeroes: Not Supported 00:12:06.904 Deallocated Guard Field: 0xFFFF 00:12:06.904 Flush: Supported 00:12:06.904 Reservation: Supported 00:12:06.904 Namespace Sharing Capabilities: Multiple Controllers 00:12:06.904 Size (in LBAs): 131072 (0GiB) 00:12:06.904 Capacity (in LBAs): 131072 (0GiB) 00:12:06.904 Utilization (in LBAs): 131072 (0GiB) 00:12:06.904 NGUID: F8A259BE5C9246F2A6FE8AF2B07FD890 00:12:06.904 UUID: f8a259be-5c92-46f2-a6fe-8af2b07fd890 00:12:06.904 Thin Provisioning: Not Supported 00:12:06.904 Per-NS Atomic Units: Yes 00:12:06.904 Atomic Boundary Size (Normal): 0 00:12:06.904 Atomic Boundary Size (PFail): 0 00:12:06.904 Atomic Boundary Offset: 0 00:12:06.904 Maximum Single Source Range Length: 65535 00:12:06.904 Maximum Copy Length: 65535 00:12:06.904 Maximum Source Range Count: 1 00:12:06.904 NGUID/EUI64 Never Reused: No 00:12:06.904 Namespace Write Protected: No 00:12:06.904 Number of LBA Formats: 1 00:12:06.904 Current LBA Format: LBA Format #00 00:12:06.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:06.904 00:12:06.904 23:54:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:06.904 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.163 [2024-05-14 23:54:07.548202] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:12.434 Initializing NVMe Controllers 00:12:12.434 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:12.434 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:12.434 Initialization complete. Launching workers. 
00:12:12.434 ======================================================== 00:12:12.434 Latency(us) 00:12:12.434 Device Information : IOPS MiB/s Average min max 00:12:12.434 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39937.98 156.01 3204.58 902.84 7673.20 00:12:12.434 ======================================================== 00:12:12.434 Total : 39937.98 156.01 3204.58 902.84 7673.20 00:12:12.434 00:12:12.434 [2024-05-14 23:54:12.656447] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:12.434 23:54:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:12.434 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.434 [2024-05-14 23:54:12.874083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:17.708 Initializing NVMe Controllers 00:12:17.708 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:17.708 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:17.708 Initialization complete. Launching workers. 00:12:17.708 ======================================================== 00:12:17.708 Latency(us) 00:12:17.708 Device Information : IOPS MiB/s Average min max 00:12:17.708 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39937.28 156.01 3204.85 929.52 7083.98 00:12:17.708 ======================================================== 00:12:17.708 Total : 39937.28 156.01 3204.85 929.52 7083.98 00:12:17.708 00:12:17.708 [2024-05-14 23:54:17.894856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:17.708 23:54:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:17.708 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.708 [2024-05-14 23:54:18.108932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:22.990 [2024-05-14 23:54:23.244497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:22.990 Initializing NVMe Controllers 00:12:22.990 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:22.990 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:22.990 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:22.990 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:22.990 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:22.990 Initialization complete. Launching workers. 
00:12:22.990 Starting thread on core 2 00:12:22.990 Starting thread on core 3 00:12:22.990 Starting thread on core 1 00:12:22.990 23:54:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:22.990 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.990 [2024-05-14 23:54:23.544639] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:26.286 [2024-05-14 23:54:26.632313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:26.286 Initializing NVMe Controllers 00:12:26.286 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:26.286 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:26.286 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:26.286 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:26.286 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:26.286 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:26.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:26.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:26.286 Initialization complete. Launching workers. 00:12:26.286 Starting thread on core 1 with urgent priority queue 00:12:26.286 Starting thread on core 2 with urgent priority queue 00:12:26.286 Starting thread on core 3 with urgent priority queue 00:12:26.286 Starting thread on core 0 with urgent priority queue 00:12:26.286 SPDK bdev Controller (SPDK2 ) core 0: 7163.67 IO/s 13.96 secs/100000 ios 00:12:26.286 SPDK bdev Controller (SPDK2 ) core 1: 7891.67 IO/s 12.67 secs/100000 ios 00:12:26.286 SPDK bdev Controller (SPDK2 ) core 2: 7803.00 IO/s 12.82 secs/100000 ios 00:12:26.286 SPDK bdev Controller (SPDK2 ) core 3: 12262.00 IO/s 8.16 secs/100000 ios 00:12:26.286 ======================================================== 00:12:26.286 00:12:26.286 23:54:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:26.286 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.584 [2024-05-14 23:54:26.923567] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:26.584 Initializing NVMe Controllers 00:12:26.584 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:26.584 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:26.584 Namespace ID: 1 size: 0GB 00:12:26.584 Initialization complete. 00:12:26.584 INFO: using host memory buffer for IO 00:12:26.584 Hello world! 
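Every tool exercised above (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the target the same way: through the VFIOUSER transport string, which names the per-subsystem socket directory and the subsystem NQN. As a rough sketch only, with the transport string and option values copied from the runs above and binary paths assumed relative to the spdk build tree, a standalone rerun against the second controller could look like:

  # dump controller/namespace data for cnode2 over vfio-user (same -r string as in the log)
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g
  # 5-second 4 KiB read workload on core 1 (-c 0x2) with queue depth 128, matching the perf step above
  ./build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

Only traddr and subnqn change per controller; the first controller in this run uses /var/run/vfio-user/domain/vfio-user1/1 with nqn.2019-07.io.spdk:cnode1.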
00:12:26.584 [2024-05-14 23:54:26.935641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:26.584 23:54:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:26.584 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.875 [2024-05-14 23:54:27.221435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:27.812 Initializing NVMe Controllers 00:12:27.812 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:27.812 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:27.812 Initialization complete. Launching workers. 00:12:27.812 submit (in ns) avg, min, max = 6139.7, 3092.0, 3999037.6 00:12:27.812 complete (in ns) avg, min, max = 20402.6, 1712.0, 4994145.6 00:12:27.812 00:12:27.812 Submit histogram 00:12:27.812 ================ 00:12:27.812 Range in us Cumulative Count 00:12:27.812 3.085 - 3.098: 0.1012% ( 17) 00:12:27.812 3.098 - 3.110: 0.7563% ( 110) 00:12:27.812 3.110 - 3.123: 2.0782% ( 222) 00:12:27.812 3.123 - 3.136: 4.2339% ( 362) 00:12:27.812 3.136 - 3.149: 7.5984% ( 565) 00:12:27.812 3.149 - 3.162: 11.4036% ( 639) 00:12:27.812 3.162 - 3.174: 17.3584% ( 1000) 00:12:27.812 3.174 - 3.187: 23.4383% ( 1021) 00:12:27.812 3.187 - 3.200: 29.3932% ( 1000) 00:12:27.812 3.200 - 3.213: 35.9019% ( 1093) 00:12:27.812 3.213 - 3.226: 41.9639% ( 1018) 00:12:27.812 3.226 - 3.238: 49.3301% ( 1237) 00:12:27.812 3.238 - 3.251: 54.9157% ( 938) 00:12:27.812 3.251 - 3.264: 58.3576% ( 578) 00:12:27.812 3.264 - 3.277: 60.7872% ( 408) 00:12:27.812 3.277 - 3.302: 66.8493% ( 1018) 00:12:27.812 3.302 - 3.328: 71.3273% ( 752) 00:12:27.812 3.328 - 3.354: 76.6450% ( 893) 00:12:27.812 3.354 - 3.379: 85.0235% ( 1407) 00:12:27.812 3.379 - 3.405: 87.6794% ( 446) 00:12:27.812 3.405 - 3.430: 88.5548% ( 147) 00:12:27.812 3.430 - 3.456: 89.1979% ( 108) 00:12:27.812 3.456 - 3.482: 90.3650% ( 196) 00:12:27.812 3.482 - 3.507: 92.0205% ( 278) 00:12:27.812 3.507 - 3.533: 93.8844% ( 313) 00:12:27.812 3.533 - 3.558: 95.2004% ( 221) 00:12:27.812 3.558 - 3.584: 96.3080% ( 186) 00:12:27.812 3.584 - 3.610: 97.2608% ( 160) 00:12:27.812 3.610 - 3.635: 98.2552% ( 167) 00:12:27.812 3.635 - 3.661: 98.8031% ( 92) 00:12:27.812 3.661 - 3.686: 99.1663% ( 61) 00:12:27.812 3.686 - 3.712: 99.4164% ( 42) 00:12:27.812 3.712 - 3.738: 99.6010% ( 31) 00:12:27.812 3.738 - 3.763: 99.6546% ( 9) 00:12:27.812 3.763 - 3.789: 99.6665% ( 2) 00:12:27.812 3.789 - 3.814: 99.6725% ( 1) 00:12:27.812 3.840 - 3.866: 99.6784% ( 1) 00:12:27.812 3.866 - 3.891: 99.6844% ( 1) 00:12:27.812 4.173 - 4.198: 99.6903% ( 1) 00:12:27.812 4.275 - 4.301: 99.6963% ( 1) 00:12:27.812 5.197 - 5.222: 99.7023% ( 1) 00:12:27.812 5.222 - 5.248: 99.7082% ( 1) 00:12:27.812 5.299 - 5.325: 99.7142% ( 1) 00:12:27.812 5.504 - 5.530: 99.7201% ( 1) 00:12:27.812 5.555 - 5.581: 99.7261% ( 1) 00:12:27.812 5.606 - 5.632: 99.7320% ( 1) 00:12:27.812 5.658 - 5.683: 99.7439% ( 2) 00:12:27.812 5.709 - 5.734: 99.7499% ( 1) 00:12:27.812 5.734 - 5.760: 99.7559% ( 1) 00:12:27.812 5.811 - 5.837: 99.7618% ( 1) 00:12:27.812 5.888 - 5.914: 99.7678% ( 1) 00:12:27.812 5.914 - 5.939: 99.7737% ( 1) 00:12:27.812 5.939 - 5.965: 99.7797% ( 1) 00:12:27.812 5.965 - 5.990: 99.7856% ( 1) 00:12:27.812 6.016 - 6.042: 99.7916% ( 1) 00:12:27.812 6.093 - 6.118: 99.7975% ( 1) 00:12:27.812 
6.118 - 6.144: 99.8094% ( 2) 00:12:27.812 6.170 - 6.195: 99.8154% ( 1) 00:12:27.812 6.221 - 6.246: 99.8214% ( 1) 00:12:27.812 6.246 - 6.272: 99.8392% ( 3) 00:12:27.812 6.298 - 6.323: 99.8511% ( 2) 00:12:27.812 6.349 - 6.374: 99.8571% ( 1) 00:12:27.812 6.400 - 6.426: 99.8630% ( 1) 00:12:27.812 6.451 - 6.477: 99.8690% ( 1) 00:12:27.812 6.758 - 6.810: 99.8809% ( 2) 00:12:27.812 7.014 - 7.066: 99.8869% ( 1) 00:12:27.812 7.117 - 7.168: 99.8928% ( 1) 00:12:27.812 7.219 - 7.270: 99.9047% ( 2) 00:12:27.812 7.373 - 7.424: 99.9107% ( 1) 00:12:27.812 7.424 - 7.475: 99.9166% ( 1) 00:12:27.812 11.571 - 11.622: 99.9226% ( 1) 00:12:27.812 14.234 - 14.336: 99.9285% ( 1) 00:12:27.812 3984.589 - 4010.803: 100.0000% ( 12) 00:12:27.812 00:12:27.812 Complete histogram 00:12:27.812 ================== 00:12:27.812 Range in us Cumulative Count 00:12:27.812 1.702 - 1.715: 0.0179% ( 3) 00:12:27.812 1.715 - 1.728: 0.8754% ( 144) 00:12:27.812 1.728 - 1.741: 6.0620% ( 871) 00:12:27.812 1.741 - 1.754: 8.4440% ( 400) 00:12:27.812 1.754 - [2024-05-14 23:54:28.317058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:27.812 1.766: 9.5933% ( 193) 00:12:27.812 1.766 - 1.779: 28.1010% ( 3108) 00:12:27.812 1.779 - 1.792: 76.1508% ( 8069) 00:12:27.812 1.792 - 1.805: 89.2515% ( 2200) 00:12:27.812 1.805 - 1.818: 94.6704% ( 910) 00:12:27.812 1.818 - 1.830: 97.1059% ( 409) 00:12:27.812 1.830 - 1.843: 97.6300% ( 88) 00:12:27.812 1.843 - 1.856: 98.3326% ( 118) 00:12:27.812 1.856 - 1.869: 98.8984% ( 95) 00:12:27.812 1.869 - 1.882: 99.1901% ( 49) 00:12:27.812 1.882 - 1.894: 99.2795% ( 15) 00:12:27.812 1.894 - 1.907: 99.3033% ( 4) 00:12:27.812 1.907 - 1.920: 99.3211% ( 3) 00:12:27.812 1.920 - 1.933: 99.3331% ( 2) 00:12:27.812 1.946 - 1.958: 99.3450% ( 2) 00:12:27.812 1.971 - 1.984: 99.3509% ( 1) 00:12:27.812 2.086 - 2.099: 99.3569% ( 1) 00:12:27.812 2.099 - 2.112: 99.3628% ( 1) 00:12:27.812 2.163 - 2.176: 99.3688% ( 1) 00:12:27.812 2.304 - 2.317: 99.3747% ( 1) 00:12:27.812 3.661 - 3.686: 99.3807% ( 1) 00:12:27.812 3.917 - 3.942: 99.3926% ( 2) 00:12:27.812 3.942 - 3.968: 99.3986% ( 1) 00:12:27.812 3.994 - 4.019: 99.4045% ( 1) 00:12:27.812 4.070 - 4.096: 99.4105% ( 1) 00:12:27.812 4.173 - 4.198: 99.4164% ( 1) 00:12:27.812 4.250 - 4.275: 99.4224% ( 1) 00:12:27.812 4.506 - 4.531: 99.4283% ( 1) 00:12:27.812 4.557 - 4.582: 99.4343% ( 1) 00:12:27.812 4.608 - 4.634: 99.4402% ( 1) 00:12:27.812 4.710 - 4.736: 99.4462% ( 1) 00:12:27.812 4.736 - 4.762: 99.4522% ( 1) 00:12:27.812 4.762 - 4.787: 99.4581% ( 1) 00:12:27.812 4.787 - 4.813: 99.4641% ( 1) 00:12:27.812 4.915 - 4.941: 99.4700% ( 1) 00:12:27.812 5.043 - 5.069: 99.4760% ( 1) 00:12:27.812 5.094 - 5.120: 99.4819% ( 1) 00:12:27.812 5.120 - 5.146: 99.4879% ( 1) 00:12:27.812 5.171 - 5.197: 99.4938% ( 1) 00:12:27.812 5.197 - 5.222: 99.4998% ( 1) 00:12:27.812 5.709 - 5.734: 99.5057% ( 1) 00:12:27.812 6.016 - 6.042: 99.5117% ( 1) 00:12:27.812 6.758 - 6.810: 99.5177% ( 1) 00:12:27.812 9.984 - 10.035: 99.5236% ( 1) 00:12:27.812 10.394 - 10.445: 99.5296% ( 1) 00:12:27.812 14.029 - 14.131: 99.5355% ( 1) 00:12:27.812 3984.589 - 4010.803: 99.9940% ( 77) 00:12:27.812 4980.736 - 5006.950: 100.0000% ( 1) 00:12:27.812 00:12:27.812 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:27.812 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:27.812 23:54:28 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:27.812 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:27.812 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:28.072 [ 00:12:28.072 { 00:12:28.072 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:28.072 "subtype": "Discovery", 00:12:28.072 "listen_addresses": [], 00:12:28.072 "allow_any_host": true, 00:12:28.072 "hosts": [] 00:12:28.072 }, 00:12:28.072 { 00:12:28.072 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:28.072 "subtype": "NVMe", 00:12:28.072 "listen_addresses": [ 00:12:28.072 { 00:12:28.072 "trtype": "VFIOUSER", 00:12:28.072 "adrfam": "IPv4", 00:12:28.072 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:28.072 "trsvcid": "0" 00:12:28.072 } 00:12:28.072 ], 00:12:28.072 "allow_any_host": true, 00:12:28.072 "hosts": [], 00:12:28.072 "serial_number": "SPDK1", 00:12:28.072 "model_number": "SPDK bdev Controller", 00:12:28.072 "max_namespaces": 32, 00:12:28.072 "min_cntlid": 1, 00:12:28.072 "max_cntlid": 65519, 00:12:28.072 "namespaces": [ 00:12:28.072 { 00:12:28.072 "nsid": 1, 00:12:28.072 "bdev_name": "Malloc1", 00:12:28.072 "name": "Malloc1", 00:12:28.072 "nguid": "90665097C32C43378C85333A355BB910", 00:12:28.072 "uuid": "90665097-c32c-4337-8c85-333a355bb910" 00:12:28.072 }, 00:12:28.072 { 00:12:28.072 "nsid": 2, 00:12:28.072 "bdev_name": "Malloc3", 00:12:28.072 "name": "Malloc3", 00:12:28.072 "nguid": "2ED90B2063034D9B85B94C1105C5C24E", 00:12:28.072 "uuid": "2ed90b20-6303-4d9b-85b9-4c1105c5c24e" 00:12:28.072 } 00:12:28.072 ] 00:12:28.072 }, 00:12:28.072 { 00:12:28.072 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:28.072 "subtype": "NVMe", 00:12:28.072 "listen_addresses": [ 00:12:28.072 { 00:12:28.072 "trtype": "VFIOUSER", 00:12:28.072 "adrfam": "IPv4", 00:12:28.072 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:28.072 "trsvcid": "0" 00:12:28.072 } 00:12:28.072 ], 00:12:28.072 "allow_any_host": true, 00:12:28.072 "hosts": [], 00:12:28.072 "serial_number": "SPDK2", 00:12:28.072 "model_number": "SPDK bdev Controller", 00:12:28.072 "max_namespaces": 32, 00:12:28.072 "min_cntlid": 1, 00:12:28.072 "max_cntlid": 65519, 00:12:28.072 "namespaces": [ 00:12:28.072 { 00:12:28.072 "nsid": 1, 00:12:28.072 "bdev_name": "Malloc2", 00:12:28.072 "name": "Malloc2", 00:12:28.072 "nguid": "F8A259BE5C9246F2A6FE8AF2B07FD890", 00:12:28.072 "uuid": "f8a259be-5c92-46f2-a6fe-8af2b07fd890" 00:12:28.072 } 00:12:28.072 ] 00:12:28.072 } 00:12:28.072 ] 00:12:28.072 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:28.072 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3515068 00:12:28.072 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:28.072 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:28.072 23:54:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:12:28.072 23:54:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:28.072 23:54:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:28.072 23:54:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:12:28.072 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:28.072 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:28.072 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.332 [2024-05-14 23:54:28.714607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:28.332 Malloc4 00:12:28.332 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:28.332 [2024-05-14 23:54:28.886833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:28.332 23:54:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:28.332 Asynchronous Event Request test 00:12:28.332 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:28.332 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:28.332 Registering asynchronous event callbacks... 00:12:28.332 Starting namespace attribute notice tests for all controllers... 00:12:28.332 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:28.332 aer_cb - Changed Namespace 00:12:28.332 Cleaning up... 
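The AER test above attaches a fourth malloc bdev as a second namespace on nqn.2019-07.io.spdk:cnode2 and then re-reads the subsystem list; the "aer_cb - Changed Namespace" notice confirms the attribute-change event fired, and the listing that follows should show Malloc4 as nsid 2. A minimal sketch of that RPC sequence, assuming the same rpc.py path and NQN recorded in the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # create the backing bdev (64 MB, 512-byte blocks), as logged above
    $RPC bdev_malloc_create 64 512 --name Malloc4
    # expose it as namespace 2 of the second vfio-user subsystem to trigger the AER
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    # re-read the subsystem list to confirm the new namespace is reported
    $RPC nvmf_get_subsystems
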
00:12:28.591 [ 00:12:28.591 { 00:12:28.591 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:28.591 "subtype": "Discovery", 00:12:28.591 "listen_addresses": [], 00:12:28.591 "allow_any_host": true, 00:12:28.591 "hosts": [] 00:12:28.591 }, 00:12:28.591 { 00:12:28.591 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:28.591 "subtype": "NVMe", 00:12:28.591 "listen_addresses": [ 00:12:28.591 { 00:12:28.591 "trtype": "VFIOUSER", 00:12:28.591 "adrfam": "IPv4", 00:12:28.591 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:28.591 "trsvcid": "0" 00:12:28.591 } 00:12:28.591 ], 00:12:28.591 "allow_any_host": true, 00:12:28.591 "hosts": [], 00:12:28.591 "serial_number": "SPDK1", 00:12:28.591 "model_number": "SPDK bdev Controller", 00:12:28.592 "max_namespaces": 32, 00:12:28.592 "min_cntlid": 1, 00:12:28.592 "max_cntlid": 65519, 00:12:28.592 "namespaces": [ 00:12:28.592 { 00:12:28.592 "nsid": 1, 00:12:28.592 "bdev_name": "Malloc1", 00:12:28.592 "name": "Malloc1", 00:12:28.592 "nguid": "90665097C32C43378C85333A355BB910", 00:12:28.592 "uuid": "90665097-c32c-4337-8c85-333a355bb910" 00:12:28.592 }, 00:12:28.592 { 00:12:28.592 "nsid": 2, 00:12:28.592 "bdev_name": "Malloc3", 00:12:28.592 "name": "Malloc3", 00:12:28.592 "nguid": "2ED90B2063034D9B85B94C1105C5C24E", 00:12:28.592 "uuid": "2ed90b20-6303-4d9b-85b9-4c1105c5c24e" 00:12:28.592 } 00:12:28.592 ] 00:12:28.592 }, 00:12:28.592 { 00:12:28.592 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:28.592 "subtype": "NVMe", 00:12:28.592 "listen_addresses": [ 00:12:28.592 { 00:12:28.592 "trtype": "VFIOUSER", 00:12:28.592 "adrfam": "IPv4", 00:12:28.592 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:28.592 "trsvcid": "0" 00:12:28.592 } 00:12:28.592 ], 00:12:28.592 "allow_any_host": true, 00:12:28.592 "hosts": [], 00:12:28.592 "serial_number": "SPDK2", 00:12:28.592 "model_number": "SPDK bdev Controller", 00:12:28.592 "max_namespaces": 32, 00:12:28.592 "min_cntlid": 1, 00:12:28.592 "max_cntlid": 65519, 00:12:28.592 "namespaces": [ 00:12:28.592 { 00:12:28.592 "nsid": 1, 00:12:28.592 "bdev_name": "Malloc2", 00:12:28.592 "name": "Malloc2", 00:12:28.592 "nguid": "F8A259BE5C9246F2A6FE8AF2B07FD890", 00:12:28.592 "uuid": "f8a259be-5c92-46f2-a6fe-8af2b07fd890" 00:12:28.592 }, 00:12:28.592 { 00:12:28.592 "nsid": 2, 00:12:28.592 "bdev_name": "Malloc4", 00:12:28.592 "name": "Malloc4", 00:12:28.592 "nguid": "8C52A1D5D4D64573B50E8F912E0CE036", 00:12:28.592 "uuid": "8c52a1d5-d4d6-4573-b50e-8f912e0ce036" 00:12:28.592 } 00:12:28.592 ] 00:12:28.592 } 00:12:28.592 ] 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3515068 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3506458 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3506458 ']' 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3506458 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3506458 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3506458' 00:12:28.592 killing process with pid 3506458 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3506458 00:12:28.592 [2024-05-14 23:54:29.154273] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:28.592 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3506458 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3515294 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3515294' 00:12:28.852 Process pid: 3515294 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3515294 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3515294 ']' 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:28.852 23:54:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:29.112 [2024-05-14 23:54:29.488843] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:29.112 [2024-05-14 23:54:29.489768] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:12:29.112 [2024-05-14 23:54:29.489806] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.112 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.112 [2024-05-14 23:54:29.558662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.112 [2024-05-14 23:54:29.623756] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:29.112 [2024-05-14 23:54:29.623797] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.112 [2024-05-14 23:54:29.623806] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.112 [2024-05-14 23:54:29.623814] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.112 [2024-05-14 23:54:29.623821] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.112 [2024-05-14 23:54:29.623872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.112 [2024-05-14 23:54:29.623969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.112 [2024-05-14 23:54:29.624031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.112 [2024-05-14 23:54:29.624033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.112 [2024-05-14 23:54:29.700326] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:29.112 [2024-05-14 23:54:29.700403] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:29.112 [2024-05-14 23:54:29.700557] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:29.112 [2024-05-14 23:54:29.700872] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:29.112 [2024-05-14 23:54:29.701060] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
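At this point the target has been restarted with --interrupt-mode and each poll group thread reports it is running in interrupt mode; the pass that follows re-creates the VFIOUSER transport with the extra '-M -I' flags and rebuilds both vfio-user subsystems. A minimal sketch of that restart step, using only the binary path and arguments recorded in the trace:

    BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # start the target in interrupt mode on cores 0-3, as logged above
    $BIN/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    nvmfpid=$!
    # once it is listening, create the vfio-user transport with the flags the test passes through
    $RPC nvmf_create_transport -t VFIOUSER -M -I
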
00:12:30.049 23:54:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:30.049 23:54:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:12:30.049 23:54:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:30.988 23:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:30.988 23:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:30.988 23:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:30.988 23:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:30.988 23:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:30.988 23:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:31.247 Malloc1 00:12:31.247 23:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:31.507 23:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:31.507 23:54:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:31.767 [2024-05-14 23:54:32.192456] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:31.767 23:54:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:31.767 23:54:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:31.767 23:54:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:32.026 Malloc2 00:12:32.026 23:54:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:32.026 23:54:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:32.285 23:54:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:32.545 23:54:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:32.545 23:54:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3515294 00:12:32.545 23:54:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3515294 ']' 00:12:32.545 23:54:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3515294 
00:12:32.545 23:54:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:12:32.545 23:54:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:32.545 23:54:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3515294 00:12:32.545 23:54:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:32.545 23:54:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:32.545 23:54:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3515294' 00:12:32.545 killing process with pid 3515294 00:12:32.545 23:54:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3515294 00:12:32.545 [2024-05-14 23:54:33.012692] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:32.545 23:54:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3515294 00:12:32.804 23:54:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:32.804 23:54:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:32.804 00:12:32.804 real 0m51.827s 00:12:32.804 user 3m23.761s 00:12:32.804 sys 0m4.790s 00:12:32.804 23:54:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:32.804 23:54:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:32.804 ************************************ 00:12:32.804 END TEST nvmf_vfio_user 00:12:32.804 ************************************ 00:12:32.804 23:54:33 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:32.804 23:54:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:32.804 23:54:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:32.804 23:54:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:32.804 ************************************ 00:12:32.804 START TEST nvmf_vfio_user_nvme_compliance 00:12:32.804 ************************************ 00:12:32.804 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:33.063 * Looking for test storage... 
00:12:33.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.063 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3516101 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3516101' 00:12:33.064 Process pid: 3516101 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3516101 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3516101 ']' 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:33.064 23:54:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:33.064 [2024-05-14 23:54:33.526023] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:12:33.064 [2024-05-14 23:54:33.526078] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.064 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.064 [2024-05-14 23:54:33.596147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.323 [2024-05-14 23:54:33.673519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.323 [2024-05-14 23:54:33.673555] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.323 [2024-05-14 23:54:33.673565] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.323 [2024-05-14 23:54:33.673573] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.323 [2024-05-14 23:54:33.673580] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
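The compliance stage uses a fresh single-subsystem setup: a VFIOUSER transport, one malloc bdev, and one subsystem (nqn.2021-09.io.spdk:cnode0) listening under /var/run/vfio-user, after which the nvme_compliance binary is pointed at that endpoint. A minimal sketch of the sequence, taken from the rpc_cmd calls and the invocation recorded below (rpc_cmd is the harness wrapper around rpc.py):

    rpc_cmd nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc_cmd bdev_malloc_create 64 512 -b malloc0
    rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
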
00:12:33.323 [2024-05-14 23:54:33.673623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.323 [2024-05-14 23:54:33.673717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.323 [2024-05-14 23:54:33.673719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.891 23:54:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:33.891 23:54:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:12:33.891 23:54:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:34.829 malloc0 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.829 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:34.830 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.830 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:34.830 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.830 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:34.830 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.830 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:34.830 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.830 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:34.830 [2024-05-14 23:54:35.407402] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:34.830 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.830 23:54:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:35.089 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.089 00:12:35.089 00:12:35.089 CUnit - A unit testing framework for C - Version 2.1-3 00:12:35.089 http://cunit.sourceforge.net/ 00:12:35.089 00:12:35.089 00:12:35.089 Suite: nvme_compliance 00:12:35.089 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-14 23:54:35.581372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:35.089 [2024-05-14 23:54:35.582710] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:35.089 [2024-05-14 23:54:35.582725] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:35.089 [2024-05-14 23:54:35.582733] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:35.089 [2024-05-14 23:54:35.584410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:35.089 passed 00:12:35.089 Test: admin_identify_ctrlr_verify_fused ...[2024-05-14 23:54:35.661942] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:35.089 [2024-05-14 23:54:35.664962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:35.348 passed 00:12:35.348 Test: admin_identify_ns ...[2024-05-14 23:54:35.743670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:35.348 [2024-05-14 23:54:35.806203] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:35.348 [2024-05-14 23:54:35.814205] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:35.348 [2024-05-14 23:54:35.835290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:35.349 passed 00:12:35.349 Test: admin_get_features_mandatory_features ...[2024-05-14 23:54:35.906532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:35.349 [2024-05-14 23:54:35.910556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:35.349 passed 00:12:35.608 Test: admin_get_features_optional_features ...[2024-05-14 23:54:35.985038] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:35.608 [2024-05-14 23:54:35.988053] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:35.608 passed 00:12:35.608 Test: admin_set_features_number_of_queues ...[2024-05-14 23:54:36.064480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:35.608 [2024-05-14 23:54:36.169287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:35.608 passed 00:12:35.867 Test: admin_get_log_page_mandatory_logs ...[2024-05-14 23:54:36.241720] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:35.867 [2024-05-14 23:54:36.244745] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:35.867 passed 
00:12:35.867 Test: admin_get_log_page_with_lpo ...[2024-05-14 23:54:36.320237] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:35.867 [2024-05-14 23:54:36.389203] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:35.867 [2024-05-14 23:54:36.402274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:35.867 passed 00:12:36.128 Test: fabric_property_get ...[2024-05-14 23:54:36.476468] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.128 [2024-05-14 23:54:36.477686] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:36.128 [2024-05-14 23:54:36.479481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.128 passed 00:12:36.128 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-14 23:54:36.555991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.128 [2024-05-14 23:54:36.557215] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:36.128 [2024-05-14 23:54:36.559013] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.128 passed 00:12:36.128 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-14 23:54:36.633253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.128 [2024-05-14 23:54:36.718209] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:36.387 [2024-05-14 23:54:36.734197] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:36.387 [2024-05-14 23:54:36.739292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.387 passed 00:12:36.388 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-14 23:54:36.814928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.388 [2024-05-14 23:54:36.816157] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:36.388 [2024-05-14 23:54:36.817945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.388 passed 00:12:36.388 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-14 23:54:36.892392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.388 [2024-05-14 23:54:36.968208] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:36.647 [2024-05-14 23:54:36.992199] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:36.647 [2024-05-14 23:54:36.997276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.647 passed 00:12:36.647 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-14 23:54:37.073059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.647 [2024-05-14 23:54:37.074301] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:36.647 [2024-05-14 23:54:37.074328] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:36.647 [2024-05-14 23:54:37.076081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.647 passed 00:12:36.647 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-14 
23:54:37.150184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.908 [2024-05-14 23:54:37.244197] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:36.908 [2024-05-14 23:54:37.252202] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:36.908 [2024-05-14 23:54:37.260199] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:36.908 [2024-05-14 23:54:37.268200] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:36.908 [2024-05-14 23:54:37.297268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.908 passed 00:12:36.908 Test: admin_create_io_sq_verify_pc ...[2024-05-14 23:54:37.370564] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.908 [2024-05-14 23:54:37.387207] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:36.908 [2024-05-14 23:54:37.404764] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.908 passed 00:12:36.908 Test: admin_create_io_qp_max_qps ...[2024-05-14 23:54:37.481258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.287 [2024-05-14 23:54:38.585202] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:38.545 [2024-05-14 23:54:38.963587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.545 passed 00:12:38.545 Test: admin_create_io_sq_shared_cq ...[2024-05-14 23:54:39.038177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.804 [2024-05-14 23:54:39.169196] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:38.804 [2024-05-14 23:54:39.205257] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.804 passed 00:12:38.804 00:12:38.804 Run Summary: Type Total Ran Passed Failed Inactive 00:12:38.804 suites 1 1 n/a 0 0 00:12:38.804 tests 18 18 18 0 0 00:12:38.804 asserts 360 360 360 0 n/a 00:12:38.804 00:12:38.804 Elapsed time = 1.490 seconds 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3516101 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3516101 ']' 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3516101 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3516101 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3516101' 00:12:38.804 killing process with pid 3516101 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 3516101 00:12:38.804 [2024-05-14 23:54:39.304855] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:38.804 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3516101 00:12:39.062 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:39.063 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:39.063 00:12:39.063 real 0m6.191s 00:12:39.063 user 0m17.405s 00:12:39.063 sys 0m0.711s 00:12:39.063 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:39.063 23:54:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:39.063 ************************************ 00:12:39.063 END TEST nvmf_vfio_user_nvme_compliance 00:12:39.063 ************************************ 00:12:39.063 23:54:39 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:39.063 23:54:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:39.063 23:54:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:39.063 23:54:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.063 ************************************ 00:12:39.063 START TEST nvmf_vfio_user_fuzz 00:12:39.063 ************************************ 00:12:39.063 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:39.321 * Looking for test storage... 
00:12:39.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.321 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:39.322 23:54:39 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3517227 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3517227' 00:12:39.322 Process pid: 3517227 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3517227 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3517227 ']' 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:39.322 23:54:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:40.261 23:54:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:40.261 23:54:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:12:40.261 23:54:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.239 malloc0 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:41.239 23:54:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:13.324 Fuzzing completed. Shutting down the fuzz application 00:13:13.324 00:13:13.324 Dumping successful admin opcodes: 00:13:13.324 8, 9, 10, 24, 00:13:13.325 Dumping successful io opcodes: 00:13:13.325 0, 00:13:13.325 NS: 0x200003a1ef00 I/O qp, Total commands completed: 877359, total successful commands: 3408, random_seed: 4164061184 00:13:13.325 NS: 0x200003a1ef00 admin qp, Total commands completed: 213421, total successful commands: 1717, random_seed: 1896609408 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3517227 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3517227 ']' 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3517227 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3517227 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3517227' 00:13:13.325 killing process with pid 3517227 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3517227 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3517227 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
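The fuzz stage drives the same style of vfio-user endpoint with SPDK's nvme_fuzz tool using the options recorded in the trace (-t 30, fixed seed -S 123456), then deletes the subsystem and kills the target; the opcode summary above lists which admin and I/O opcodes completed successfully. A minimal sketch of the invocation, copied from the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
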
00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:13.325 00:13:13.325 real 0m33.245s 00:13:13.325 user 0m30.890s 00:13:13.325 sys 0m30.311s 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:13.325 23:55:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:13.325 ************************************ 00:13:13.325 END TEST nvmf_vfio_user_fuzz 00:13:13.325 ************************************ 00:13:13.325 23:55:12 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:13.325 23:55:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:13.325 23:55:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:13.325 23:55:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:13.325 ************************************ 00:13:13.325 START TEST nvmf_host_management 00:13:13.325 ************************************ 00:13:13.325 23:55:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:13.325 * Looking for test storage... 00:13:13.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
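
nvmf/common.sh builds the target's command line incrementally: NVMF_APP starts as the bare binary plus "-i $NVMF_APP_SHM_ID -e 0xFFFF", and once the test namespace exists the whole array is re-prefixed with "ip netns exec". A minimal sketch of that array-composition pattern, with placeholder values for everything the log does not show:

  #!/usr/bin/env bash
  # Sketch only: binary path and shm id are placeholders; flag meanings follow the log
  # ("-e 0xFFFF" shows up later as "Tracepoint Group Mask 0xFFFF specified").
  NVMF_APP=(/path/to/spdk/build/bin/nvmf_tgt)
  NVMF_APP_SHM_ID=0
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)        # shared-memory id + tracepoint mask

  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

  # After the namespace is up, every later "${NVMF_APP[@]}" invocation runs inside it.
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  echo "target will be started as: ${NVMF_APP[*]} -m 0x1E"
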
00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.325 23:55:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.894 23:55:19 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:19.894 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:19.894 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
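
The block above classifies the host's NICs purely by PCI IDs: vendor 0x8086 with device 0x159b lands in the e810 bucket, and both ports at 0000:af:00.x match. A hedged, sysfs-only sketch of the same check for a single PCI function (the BDF is the one printed above; everything else is an assumption):

  # Decide whether one PCI function is an Intel E810 (0x8086:0x159b) using only sysfs.
  bdf=0000:af:00.0                                    # BDF taken from the log above
  vendor=$(cat "/sys/bus/pci/devices/$bdf/vendor")    # e.g. 0x8086
  device=$(cat "/sys/bus/pci/devices/$bdf/device")    # e.g. 0x159b
  if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
      echo "Found $bdf ($vendor - $device)"           # same message shape as the harness prints
  fi
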
00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:19.894 Found net devices under 0000:af:00.0: cvl_0_0 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:19.894 Found net devices under 0000:af:00.1: cvl_0_1 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:13:19.894 00:13:19.894 --- 10.0.0.2 ping statistics --- 00:13:19.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.894 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:13:19.894 00:13:19.894 --- 10.0.0.1 ping statistics --- 00:13:19.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.894 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:19.894 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:19.895 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3526053 00:13:19.895 23:55:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3526053 00:13:19.895 23:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3526053 ']' 00:13:19.895 23:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.895 23:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:19.895 23:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.895 23:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:19.895 23:55:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:19.895 [2024-05-14 23:55:19.688630] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
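
The nvmf_tgt being launched here runs inside the cvl_0_0_ns_spdk namespace assembled a few lines earlier: one E810 port (cvl_0_0, 10.0.0.2/24) is moved into the namespace for the target, the other (cvl_0_1, 10.0.0.1/24) stays in the root namespace for the initiator, an iptables rule opens TCP/4420, and both directions are ping-verified. A condensed sketch of those steps, lifted from the commands logged above (interface names and addresses are the log's; run as root):

  # Sketch of the namespace plumbing shown above; not the harness itself.
  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one physical port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
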
00:13:19.895 [2024-05-14 23:55:19.688680] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.895 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.895 [2024-05-14 23:55:19.764465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.895 [2024-05-14 23:55:19.841373] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.895 [2024-05-14 23:55:19.841409] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.895 [2024-05-14 23:55:19.841419] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.895 [2024-05-14 23:55:19.841427] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.895 [2024-05-14 23:55:19.841435] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.895 [2024-05-14 23:55:19.841539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.895 [2024-05-14 23:55:19.841622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.895 [2024-05-14 23:55:19.841733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.895 [2024-05-14 23:55:19.841734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:20.154 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:20.154 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:20.154 23:55:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:20.154 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:20.155 [2024-05-14 23:55:20.566087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.155 23:55:20 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:20.155 Malloc0 00:13:20.155 [2024-05-14 23:55:20.632484] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:20.155 [2024-05-14 23:55:20.632743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3526303 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3526303 /var/tmp/bdevperf.sock 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3526303 ']' 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:20.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:20.155 { 00:13:20.155 "params": { 00:13:20.155 "name": "Nvme$subsystem", 00:13:20.155 "trtype": "$TEST_TRANSPORT", 00:13:20.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:20.155 "adrfam": "ipv4", 00:13:20.155 "trsvcid": "$NVMF_PORT", 00:13:20.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:20.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:20.155 "hdgst": ${hdgst:-false}, 00:13:20.155 "ddgst": ${ddgst:-false} 00:13:20.155 }, 00:13:20.155 "method": "bdev_nvme_attach_controller" 00:13:20.155 } 00:13:20.155 EOF 00:13:20.155 )") 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
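
gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem by substituting the loop variable into the heredoc above and piping the result through jq; bdevperf then reads it via process substitution, which is why the logged command line shows --json /dev/fd/63. A hedged sketch of the consuming side (the bdevperf flags are the ones from the log; the SPDK path is a placeholder and gen_nvmf_target_json is assumed to be the sourced harness helper):

  # Feed a generated JSON config to bdevperf without writing a temp file.
  /path/to/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10
  # <(...) appears inside the process as a /dev/fd path such as /dev/fd/63,
  # matching the command line recorded in the log.
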
00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:20.155 23:55:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:20.155 "params": { 00:13:20.155 "name": "Nvme0", 00:13:20.155 "trtype": "tcp", 00:13:20.155 "traddr": "10.0.0.2", 00:13:20.155 "adrfam": "ipv4", 00:13:20.155 "trsvcid": "4420", 00:13:20.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:20.155 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:20.155 "hdgst": false, 00:13:20.155 "ddgst": false 00:13:20.155 }, 00:13:20.155 "method": "bdev_nvme_attach_controller" 00:13:20.155 }' 00:13:20.155 [2024-05-14 23:55:20.736862] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:13:20.155 [2024-05-14 23:55:20.736911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526303 ] 00:13:20.414 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.414 [2024-05-14 23:55:20.807828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.414 [2024-05-14 23:55:20.876358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.672 Running I/O for 10 seconds... 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.242 23:55:21 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.242 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:21.242 [2024-05-14 23:55:21.611975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dd230 is same with the state(5) to be set 00:13:21.242 [2024-05-14 23:55:21.612452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.242 [2024-05-14 23:55:21.612486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.242 [2024-05-14 23:55:21.612504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.242 [2024-05-14 23:55:21.612517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.242 [2024-05-14 23:55:21.612530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.242 [2024-05-14 23:55:21.612542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.242 [2024-05-14 23:55:21.612554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.242 [2024-05-14 23:55:21.612566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.242 [2024-05-14 23:55:21.612578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.242 [2024-05-14 23:55:21.612590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.242 [2024-05-14 23:55:21.612607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.242 [2024-05-14 23:55:21.612618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.242 [2024-05-14 23:55:21.612631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.242 [2024-05-14 23:55:21.612642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.242 [2024-05-14 23:55:21.612654] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.242 [2024-05-14 23:55:21.612665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.242 [2024-05-14 23:55:21.612678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.242 [2024-05-14 23:55:21.612689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.242 [2024-05-14 23:55:21.612702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.242 [2024-05-14 23:55:21.612712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.612985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.612996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.243 [2024-05-14 23:55:21.613679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.243 [2024-05-14 23:55:21.613690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.613986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:21.244 [2024-05-14 23:55:21.613997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.244 [2024-05-14 23:55:21.614062] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfcdad0 was disconnected and freed. reset controller. 
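
The wall of "ABORTED - SQ DELETION" completions above is bdevperf's in-flight queue (depth 64 in this run) being failed back after the host is removed from the subsystem mid-run: each pending read or write is printed once as the command and once as its completion status before the qpair is freed. A small hedged awk sketch for summarizing such a dump from a saved copy of the log instead of reading it line by line (the file path is a placeholder):

  # Count aborted completions and the read/write split in a saved copy of this log.
  awk '
      /ABORTED - SQ DELETION/        { aborted++ }
      /nvme_io_qpair_print_command/  { if (/WRITE sqid/) writes++; else if (/READ sqid/) reads++ }
      END { printf "aborted completions: %d (reads: %d, writes: %d)\n", aborted, reads, writes }
  ' /tmp/host_management_run.log
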
00:13:21.244 [2024-05-14 23:55:21.614938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:21.244 task offset: 70912 on job bdev=Nvme0n1 fails 00:13:21.244 00:13:21.244 Latency(us) 00:13:21.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.244 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:21.244 Job: Nvme0n1 ended in about 0.41 seconds with error 00:13:21.244 Verification LBA range: start 0x0 length 0x400 00:13:21.244 Nvme0n1 : 0.41 1238.73 77.42 154.84 0.00 44875.89 2359.30 53057.95 00:13:21.244 =================================================================================================================== 00:13:21.244 Total : 1238.73 77.42 154.84 0.00 44875.89 2359.30 53057.95 00:13:21.244 [2024-05-14 23:55:21.616490] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:21.244 [2024-05-14 23:55:21.616508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbc9f0 (9): Bad file descriptor 00:13:21.244 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.244 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:21.244 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.244 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:21.244 23:55:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.244 23:55:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:21.244 [2024-05-14 23:55:21.670086] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
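
A quick sanity check on the table above: with -o 65536 each I/O is 64 KiB, so the MiB/s column is simply IOPS divided by 16, and 1238.73 / 16 ≈ 77.42 matches the failed run; the same relation holds for the 1-second rerun that follows. Assuming bc is available:

  # 64 KiB per I/O  =>  MiB/s = IOPS * 65536 / 1048576 = IOPS / 16
  echo 'scale=2; 1238.73 / 16' | bc    # -> 77.42, matching the MiB/s column above
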
00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3526303 00:13:22.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3526303) - No such process 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:22.179 { 00:13:22.179 "params": { 00:13:22.179 "name": "Nvme$subsystem", 00:13:22.179 "trtype": "$TEST_TRANSPORT", 00:13:22.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:22.179 "adrfam": "ipv4", 00:13:22.179 "trsvcid": "$NVMF_PORT", 00:13:22.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:22.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:22.179 "hdgst": ${hdgst:-false}, 00:13:22.179 "ddgst": ${ddgst:-false} 00:13:22.179 }, 00:13:22.179 "method": "bdev_nvme_attach_controller" 00:13:22.179 } 00:13:22.179 EOF 00:13:22.179 )") 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:22.179 23:55:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:22.179 "params": { 00:13:22.179 "name": "Nvme0", 00:13:22.179 "trtype": "tcp", 00:13:22.179 "traddr": "10.0.0.2", 00:13:22.179 "adrfam": "ipv4", 00:13:22.179 "trsvcid": "4420", 00:13:22.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:22.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:22.179 "hdgst": false, 00:13:22.179 "ddgst": false 00:13:22.179 }, 00:13:22.179 "method": "bdev_nvme_attach_controller" 00:13:22.179 }' 00:13:22.179 [2024-05-14 23:55:22.696303] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:13:22.179 [2024-05-14 23:55:22.696363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526593 ] 00:13:22.179 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.179 [2024-05-14 23:55:22.767763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.438 [2024-05-14 23:55:22.834556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.697 Running I/O for 1 seconds... 
00:13:23.632 00:13:23.632 Latency(us) 00:13:23.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.632 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:23.632 Verification LBA range: start 0x0 length 0x400 00:13:23.632 Nvme0n1 : 1.05 1283.06 80.19 0.00 0.00 49279.47 11114.91 53057.95 00:13:23.632 =================================================================================================================== 00:13:23.632 Total : 1283.06 80.19 0.00 0.00 49279.47 11114.91 53057.95 00:13:23.891 23:55:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:23.891 23:55:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:23.891 23:55:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:23.891 23:55:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.892 rmmod nvme_tcp 00:13:23.892 rmmod nvme_fabrics 00:13:23.892 rmmod nvme_keyring 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3526053 ']' 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3526053 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3526053 ']' 00:13:23.892 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3526053 00:13:24.151 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:13:24.151 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:24.151 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3526053 00:13:24.151 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:24.151 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:24.151 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3526053' 00:13:24.151 killing process with pid 3526053 00:13:24.151 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3526053 00:13:24.151 [2024-05-14 23:55:24.539992] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:24.151 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3526053 00:13:24.151 [2024-05-14 23:55:24.739056] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:24.410 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:24.410 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:24.410 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:24.410 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.410 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:24.410 23:55:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.410 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.410 23:55:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.380 23:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:26.380 23:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:26.380 00:13:26.380 real 0m13.900s 00:13:26.380 user 0m23.900s 00:13:26.380 sys 0m6.323s 00:13:26.380 23:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:26.380 23:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:26.380 ************************************ 00:13:26.380 END TEST nvmf_host_management 00:13:26.380 ************************************ 00:13:26.380 23:55:26 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:26.380 23:55:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:26.380 23:55:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:26.380 23:55:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.380 ************************************ 00:13:26.380 START TEST nvmf_lvol 00:13:26.380 ************************************ 00:13:26.380 23:55:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:26.640 * Looking for test storage... 
00:13:26.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.640 23:55:27 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.640 23:55:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:33.209 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:33.209 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:33.209 Found net devices under 0000:af:00.0: cvl_0_0 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:33.209 Found net devices under 0000:af:00.1: cvl_0_1 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:33.209 
23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:33.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:33.209 00:13:33.209 --- 10.0.0.2 ping statistics --- 00:13:33.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.209 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:13:33.209 00:13:33.209 --- 10.0.0.1 ping statistics --- 00:13:33.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.209 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3530566 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3530566 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3530566 ']' 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:33.209 23:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:33.209 [2024-05-14 23:55:33.695207] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:13:33.210 [2024-05-14 23:55:33.695254] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.210 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.210 [2024-05-14 23:55:33.769079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:33.469 [2024-05-14 23:55:33.842215] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.469 [2024-05-14 23:55:33.842252] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
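The nvmf_tcp_init trace above splits the two e810 ports between the default namespace (initiator side, 10.0.0.1 on cvl_0_1) and a dedicated namespace holding the target (10.0.0.2 on cvl_0_0). Condensed, and keeping the interface names from this test bed, the setup is roughly:

# Sketch of the namespace split used by the test (cvl_0_0/cvl_0_1 are specific to this machine).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # verify initiator -> target reachability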
00:13:33.469 [2024-05-14 23:55:33.842262] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.469 [2024-05-14 23:55:33.842271] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.469 [2024-05-14 23:55:33.842278] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.469 [2024-05-14 23:55:33.842330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.469 [2024-05-14 23:55:33.842441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.469 [2024-05-14 23:55:33.842443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.036 23:55:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:34.036 23:55:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:13:34.036 23:55:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:34.036 23:55:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:34.036 23:55:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:34.036 23:55:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.036 23:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:34.294 [2024-05-14 23:55:34.682467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.294 23:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.553 23:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:34.553 23:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.553 23:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:34.553 23:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:34.812 23:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:35.071 23:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2d9dc4bc-eeff-46f3-8255-a6864385564b 00:13:35.071 23:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2d9dc4bc-eeff-46f3-8255-a6864385564b lvol 20 00:13:35.071 23:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=049a97ff-5594-4dda-b7bd-2b6a0ba01232 00:13:35.071 23:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:35.329 23:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 049a97ff-5594-4dda-b7bd-2b6a0ba01232 00:13:35.587 23:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:13:35.587 [2024-05-14 23:55:36.146595] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:35.587 [2024-05-14 23:55:36.146864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.587 23:55:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:35.845 23:55:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3531118 00:13:35.845 23:55:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:35.845 23:55:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:35.845 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.780 23:55:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 049a97ff-5594-4dda-b7bd-2b6a0ba01232 MY_SNAPSHOT 00:13:37.039 23:55:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=49a300c9-5526-4f71-a66e-843d64fee225 00:13:37.039 23:55:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 049a97ff-5594-4dda-b7bd-2b6a0ba01232 30 00:13:37.298 23:55:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 49a300c9-5526-4f71-a66e-843d64fee225 MY_CLONE 00:13:37.557 23:55:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=645dcd89-5a39-4bf2-b63b-22298991758d 00:13:37.557 23:55:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 645dcd89-5a39-4bf2-b63b-22298991758d 00:13:37.816 23:55:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3531118 00:13:47.796 Initializing NVMe Controllers 00:13:47.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:47.797 Controller IO queue size 128, less than required. 00:13:47.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:47.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:47.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:47.797 Initialization complete. Launching workers. 
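Condensed from the nvmf_lvol trace above, the logical-volume plumbing behind the spdk_nvme_perf run is a short rpc.py sequence. In the sketch below $LVS, $LVOL, $SNAP and $CLONE stand for the identifiers the create calls return (2d9dc4bc-..., 049a97ff-..., 49a300c9-... and 645dcd89-... in this run); the shell capture is an illustration, not the harness's exact wrapper:

# Two malloc bdevs striped into raid0, an lvstore on top, then a 20 MiB lvol exported over TCP.
./scripts/rpc.py bdev_malloc_create 64 512
./scripts/rpc.py bdev_malloc_create 64 512
./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
LVS=$(./scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)
LVOL=$(./scripts/rpc.py bdev_lvol_create -u "$LVS" lvol 20)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# While perf writes to the namespace: snapshot, grow the lvol, clone the snapshot, inflate the clone.
SNAP=$(./scripts/rpc.py bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)
./scripts/rpc.py bdev_lvol_resize "$LVOL" 30
CLONE=$(./scripts/rpc.py bdev_lvol_clone "$SNAP" MY_CLONE)
./scripts/rpc.py bdev_lvol_inflate "$CLONE"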
00:13:47.797 ======================================================== 00:13:47.797 Latency(us) 00:13:47.797 Device Information : IOPS MiB/s Average min max 00:13:47.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12189.60 47.62 10503.91 1679.74 87760.54 00:13:47.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12088.00 47.22 10591.34 3678.85 41524.06 00:13:47.797 ======================================================== 00:13:47.797 Total : 24277.60 94.83 10547.44 1679.74 87760.54 00:13:47.797 00:13:47.797 23:55:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:47.797 23:55:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 049a97ff-5594-4dda-b7bd-2b6a0ba01232 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2d9dc4bc-eeff-46f3-8255-a6864385564b 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.797 rmmod nvme_tcp 00:13:47.797 rmmod nvme_fabrics 00:13:47.797 rmmod nvme_keyring 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3530566 ']' 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3530566 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3530566 ']' 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3530566 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3530566 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3530566' 00:13:47.797 killing process with pid 3530566 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3530566 00:13:47.797 [2024-05-14 23:55:47.396371] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3530566 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.797 23:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.175 23:55:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.175 00:13:49.175 real 0m22.791s 00:13:49.175 user 1m2.143s 00:13:49.175 sys 0m9.706s 00:13:49.175 23:55:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:49.175 23:55:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:49.175 ************************************ 00:13:49.175 END TEST nvmf_lvol 00:13:49.175 ************************************ 00:13:49.175 23:55:49 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:49.175 23:55:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:49.175 23:55:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:49.175 23:55:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.434 ************************************ 00:13:49.434 START TEST nvmf_lvs_grow 00:13:49.434 ************************************ 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:49.434 * Looking for test storage... 
00:13:49.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.434 23:55:49 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.435 23:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:56.069 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:56.069 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:56.069 Found net devices under 0000:af:00.0: cvl_0_0 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:56.069 Found net devices under 0000:af:00.1: cvl_0_1 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.069 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:56.070 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:56.070 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.070 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:56.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:13:56.328 00:13:56.328 --- 10.0.0.2 ping statistics --- 00:13:56.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.328 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:56.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:13:56.328 00:13:56.328 --- 10.0.0.1 ping statistics --- 00:13:56.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.328 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:56.328 23:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:56.587 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3536678 00:13:56.587 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:56.587 23:55:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3536678 00:13:56.587 23:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3536678 ']' 00:13:56.587 23:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.587 23:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:56.587 23:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.587 23:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:56.587 23:55:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:56.587 [2024-05-14 23:55:56.972841] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:13:56.587 [2024-05-14 23:55:56.972891] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.587 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.587 [2024-05-14 23:55:57.047945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.587 [2024-05-14 23:55:57.115451] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.587 [2024-05-14 23:55:57.115491] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
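As in the earlier lvol run, nvmfappstart here launches nvmf_tgt inside the target namespace and blocks until its RPC socket answers before the test creates the TCP transport. Stripped of the harness wrappers, that is roughly the following; the polling loop is an approximation of waitforlisten, and /var/tmp/spdk.sock is assumed to be the default RPC socket:

# Start the target in the namespace, wait for RPC to come up, then create the TCP transport.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
NVMF_PID=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192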
00:13:56.587 [2024-05-14 23:55:57.115500] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.587 [2024-05-14 23:55:57.115509] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.587 [2024-05-14 23:55:57.115516] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.587 [2024-05-14 23:55:57.115545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:57.524 [2024-05-14 23:55:57.955621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:57.524 23:55:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:57.524 ************************************ 00:13:57.524 START TEST lvs_grow_clean 00:13:57.524 ************************************ 00:13:57.524 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:13:57.524 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:57.524 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:57.524 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:57.524 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:57.524 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:57.524 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:57.524 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:57.524 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:57.524 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:57.783 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:57.783 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:58.041 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:13:58.041 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:13:58.041 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:58.041 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:58.041 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:58.041 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a lvol 150 00:13:58.300 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8d21c1b6-36ca-45a0-8188-5d98b25f3821 00:13:58.300 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:58.300 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:58.300 [2024-05-14 23:55:58.888841] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:58.300 [2024-05-14 23:55:58.888893] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:58.558 true 00:13:58.558 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:13:58.559 23:55:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:58.559 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:58.559 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:58.817 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8d21c1b6-36ca-45a0-8188-5d98b25f3821 00:13:58.817 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:59.075 [2024-05-14 23:55:59.538586] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:59.075 [2024-05-14 
23:55:59.538844] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.075 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:59.334 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3537253 00:13:59.334 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:59.334 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3537253 /var/tmp/bdevperf.sock 00:13:59.334 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3537253 ']' 00:13:59.334 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:59.334 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.334 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:59.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:59.334 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.334 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:59.334 23:55:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:59.334 [2024-05-14 23:55:59.745944] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
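What the trace does next is the heart of the clean-grow check. Condensed into plain commands, it looks roughly like this (a sketch only: $SPDK stands for the checked-out repo root and $LVS for the lvstore UUID created above, both placeholders here):

  # attach the exported namespace to the separate bdevperf app over NVMe/TCP
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # kick off the 10s randwrite run, then grow the lvstore while I/O is in flight
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 2
  $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u $LVS
  # growing the 200M AIO file to 400M should roughly double the reported data clusters (49 -> 99)
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'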
00:13:59.334 [2024-05-14 23:55:59.745992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3537253 ] 00:13:59.335 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.335 [2024-05-14 23:55:59.814521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.335 [2024-05-14 23:55:59.889090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.271 23:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:00.271 23:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:14:00.271 23:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:00.271 Nvme0n1 00:14:00.271 23:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:00.539 [ 00:14:00.539 { 00:14:00.539 "name": "Nvme0n1", 00:14:00.539 "aliases": [ 00:14:00.540 "8d21c1b6-36ca-45a0-8188-5d98b25f3821" 00:14:00.540 ], 00:14:00.540 "product_name": "NVMe disk", 00:14:00.540 "block_size": 4096, 00:14:00.540 "num_blocks": 38912, 00:14:00.540 "uuid": "8d21c1b6-36ca-45a0-8188-5d98b25f3821", 00:14:00.540 "assigned_rate_limits": { 00:14:00.540 "rw_ios_per_sec": 0, 00:14:00.540 "rw_mbytes_per_sec": 0, 00:14:00.540 "r_mbytes_per_sec": 0, 00:14:00.540 "w_mbytes_per_sec": 0 00:14:00.540 }, 00:14:00.540 "claimed": false, 00:14:00.540 "zoned": false, 00:14:00.540 "supported_io_types": { 00:14:00.540 "read": true, 00:14:00.540 "write": true, 00:14:00.540 "unmap": true, 00:14:00.540 "write_zeroes": true, 00:14:00.540 "flush": true, 00:14:00.540 "reset": true, 00:14:00.540 "compare": true, 00:14:00.540 "compare_and_write": true, 00:14:00.540 "abort": true, 00:14:00.540 "nvme_admin": true, 00:14:00.540 "nvme_io": true 00:14:00.540 }, 00:14:00.540 "memory_domains": [ 00:14:00.540 { 00:14:00.540 "dma_device_id": "system", 00:14:00.540 "dma_device_type": 1 00:14:00.540 } 00:14:00.540 ], 00:14:00.540 "driver_specific": { 00:14:00.540 "nvme": [ 00:14:00.540 { 00:14:00.540 "trid": { 00:14:00.540 "trtype": "TCP", 00:14:00.540 "adrfam": "IPv4", 00:14:00.540 "traddr": "10.0.0.2", 00:14:00.540 "trsvcid": "4420", 00:14:00.540 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:00.540 }, 00:14:00.540 "ctrlr_data": { 00:14:00.540 "cntlid": 1, 00:14:00.540 "vendor_id": "0x8086", 00:14:00.540 "model_number": "SPDK bdev Controller", 00:14:00.540 "serial_number": "SPDK0", 00:14:00.540 "firmware_revision": "24.05", 00:14:00.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:00.540 "oacs": { 00:14:00.540 "security": 0, 00:14:00.540 "format": 0, 00:14:00.540 "firmware": 0, 00:14:00.540 "ns_manage": 0 00:14:00.540 }, 00:14:00.540 "multi_ctrlr": true, 00:14:00.540 "ana_reporting": false 00:14:00.540 }, 00:14:00.540 "vs": { 00:14:00.540 "nvme_version": "1.3" 00:14:00.540 }, 00:14:00.540 "ns_data": { 00:14:00.540 "id": 1, 00:14:00.540 "can_share": true 00:14:00.540 } 00:14:00.540 } 00:14:00.540 ], 00:14:00.540 "mp_policy": "active_passive" 00:14:00.540 } 00:14:00.540 } 00:14:00.540 ] 00:14:00.540 23:56:00 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3537515 00:14:00.540 23:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:00.540 23:56:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:00.540 Running I/O for 10 seconds... 00:14:01.478 Latency(us) 00:14:01.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.478 Nvme0n1 : 1.00 23714.00 92.63 0.00 0.00 0.00 0.00 0.00 00:14:01.478 =================================================================================================================== 00:14:01.478 Total : 23714.00 92.63 0.00 0.00 0.00 0.00 0.00 00:14:01.478 00:14:02.414 23:56:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:14:02.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.673 Nvme0n1 : 2.00 24017.00 93.82 0.00 0.00 0.00 0.00 0.00 00:14:02.673 =================================================================================================================== 00:14:02.673 Total : 24017.00 93.82 0.00 0.00 0.00 0.00 0.00 00:14:02.673 00:14:02.673 true 00:14:02.673 23:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:14:02.673 23:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:02.932 23:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:02.932 23:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:02.932 23:56:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3537515 00:14:03.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.501 Nvme0n1 : 3.00 24083.33 94.08 0.00 0.00 0.00 0.00 0.00 00:14:03.501 =================================================================================================================== 00:14:03.501 Total : 24083.33 94.08 0.00 0.00 0.00 0.00 0.00 00:14:03.501 00:14:04.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.875 Nvme0n1 : 4.00 24178.75 94.45 0.00 0.00 0.00 0.00 0.00 00:14:04.875 =================================================================================================================== 00:14:04.875 Total : 24178.75 94.45 0.00 0.00 0.00 0.00 0.00 00:14:04.875 00:14:05.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.810 Nvme0n1 : 5.00 24232.60 94.66 0.00 0.00 0.00 0.00 0.00 00:14:05.810 =================================================================================================================== 00:14:05.810 Total : 24232.60 94.66 0.00 0.00 0.00 0.00 0.00 00:14:05.810 00:14:06.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.745 Nvme0n1 : 6.00 24236.83 94.68 0.00 0.00 0.00 0.00 0.00 00:14:06.745 
=================================================================================================================== 00:14:06.745 Total : 24236.83 94.68 0.00 0.00 0.00 0.00 0.00 00:14:06.745 00:14:07.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.681 Nvme0n1 : 7.00 24260.00 94.77 0.00 0.00 0.00 0.00 0.00 00:14:07.681 =================================================================================================================== 00:14:07.681 Total : 24260.00 94.77 0.00 0.00 0.00 0.00 0.00 00:14:07.681 00:14:08.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.676 Nvme0n1 : 8.00 24249.25 94.72 0.00 0.00 0.00 0.00 0.00 00:14:08.676 =================================================================================================================== 00:14:08.676 Total : 24249.25 94.72 0.00 0.00 0.00 0.00 0.00 00:14:08.676 00:14:09.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.612 Nvme0n1 : 9.00 24264.11 94.78 0.00 0.00 0.00 0.00 0.00 00:14:09.612 =================================================================================================================== 00:14:09.612 Total : 24264.11 94.78 0.00 0.00 0.00 0.00 0.00 00:14:09.612 00:14:10.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.548 Nvme0n1 : 10.00 24225.00 94.63 0.00 0.00 0.00 0.00 0.00 00:14:10.548 =================================================================================================================== 00:14:10.548 Total : 24225.00 94.63 0.00 0.00 0.00 0.00 0.00 00:14:10.548 00:14:10.548 00:14:10.548 Latency(us) 00:14:10.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.548 Nvme0n1 : 10.01 24224.16 94.63 0.00 0.00 5280.05 3696.23 19713.23 00:14:10.548 =================================================================================================================== 00:14:10.548 Total : 24224.16 94.63 0.00 0.00 5280.05 3696.23 19713.23 00:14:10.548 0 00:14:10.548 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3537253 00:14:10.548 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3537253 ']' 00:14:10.548 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3537253 00:14:10.548 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:14:10.548 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:10.548 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3537253 00:14:10.548 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:10.548 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:10.548 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3537253' 00:14:10.548 killing process with pid 3537253 00:14:10.548 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3537253 00:14:10.548 Received shutdown signal, test time was about 10.000000 seconds 00:14:10.548 00:14:10.548 Latency(us) 00:14:10.548 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:10.548 =================================================================================================================== 00:14:10.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:10.549 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3537253 00:14:10.807 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.065 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:11.324 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:14:11.324 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:11.324 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:11.324 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:11.324 23:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:11.584 [2024-05-14 23:56:12.052749] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:11.584 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:14:11.843 request: 00:14:11.843 { 00:14:11.843 "uuid": "96e91898-c7d8-4657-a10e-a3c7f6304c0a", 00:14:11.843 "method": "bdev_lvol_get_lvstores", 00:14:11.843 "req_id": 1 00:14:11.843 } 00:14:11.843 Got JSON-RPC error response 00:14:11.843 response: 00:14:11.843 { 00:14:11.843 "code": -19, 00:14:11.843 "message": "No such device" 00:14:11.843 } 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:11.843 aio_bdev 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8d21c1b6-36ca-45a0-8188-5d98b25f3821 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=8d21c1b6-36ca-45a0-8188-5d98b25f3821 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:11.843 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:12.101 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8d21c1b6-36ca-45a0-8188-5d98b25f3821 -t 2000 00:14:12.360 [ 00:14:12.360 { 00:14:12.360 "name": "8d21c1b6-36ca-45a0-8188-5d98b25f3821", 00:14:12.360 "aliases": [ 00:14:12.360 "lvs/lvol" 00:14:12.360 ], 00:14:12.360 "product_name": "Logical Volume", 00:14:12.360 "block_size": 4096, 00:14:12.360 "num_blocks": 38912, 00:14:12.360 "uuid": "8d21c1b6-36ca-45a0-8188-5d98b25f3821", 00:14:12.360 "assigned_rate_limits": { 00:14:12.360 "rw_ios_per_sec": 0, 00:14:12.360 "rw_mbytes_per_sec": 0, 00:14:12.360 "r_mbytes_per_sec": 0, 00:14:12.360 "w_mbytes_per_sec": 0 00:14:12.360 }, 00:14:12.360 "claimed": false, 00:14:12.360 "zoned": false, 00:14:12.360 "supported_io_types": { 00:14:12.360 "read": true, 00:14:12.360 "write": true, 00:14:12.360 "unmap": true, 00:14:12.360 "write_zeroes": true, 00:14:12.360 "flush": false, 00:14:12.360 "reset": true, 00:14:12.360 "compare": false, 00:14:12.360 "compare_and_write": false, 00:14:12.360 "abort": false, 00:14:12.360 "nvme_admin": false, 00:14:12.360 "nvme_io": false 00:14:12.361 }, 00:14:12.361 "driver_specific": { 00:14:12.361 "lvol": { 00:14:12.361 "lvol_store_uuid": "96e91898-c7d8-4657-a10e-a3c7f6304c0a", 00:14:12.361 "base_bdev": "aio_bdev", 
00:14:12.361 "thin_provision": false, 00:14:12.361 "num_allocated_clusters": 38, 00:14:12.361 "snapshot": false, 00:14:12.361 "clone": false, 00:14:12.361 "esnap_clone": false 00:14:12.361 } 00:14:12.361 } 00:14:12.361 } 00:14:12.361 ] 00:14:12.361 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:14:12.361 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:14:12.361 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:12.361 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:12.361 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:14:12.361 23:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:12.619 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:12.619 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d21c1b6-36ca-45a0-8188-5d98b25f3821 00:14:12.878 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 96e91898-c7d8-4657-a10e-a3c7f6304c0a 00:14:12.878 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.137 00:14:13.137 real 0m15.612s 00:14:13.137 user 0m14.700s 00:14:13.137 sys 0m2.009s 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:13.137 ************************************ 00:14:13.137 END TEST lvs_grow_clean 00:14:13.137 ************************************ 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:13.137 ************************************ 00:14:13.137 START TEST lvs_grow_dirty 00:14:13.137 ************************************ 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:13.137 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:13.395 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:13.395 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.395 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.395 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:13.395 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:13.395 23:56:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:13.654 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:13.654 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:13.654 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:13.912 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:13.912 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:13.912 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 428f803d-72e5-438c-9599-c1c4551f2b1f lvol 150 00:14:13.912 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9dcf540a-1d56-4b73-b7ec-79f1e8c9be94 00:14:13.912 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.912 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:14.171 [2024-05-14 23:56:14.596825] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:14.171 [2024-05-14 23:56:14.596875] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:14.171 true 00:14:14.171 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:14.171 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:14.430 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:14.430 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:14.430 23:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9dcf540a-1d56-4b73-b7ec-79f1e8c9be94 00:14:14.689 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:14.689 [2024-05-14 23:56:15.258819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.689 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:14.948 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:14.948 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3539979 00:14:14.948 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.948 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3539979 /var/tmp/bdevperf.sock 00:14:14.948 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3539979 ']' 00:14:14.948 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.948 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.948 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.948 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.948 23:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:14.948 [2024-05-14 23:56:15.480573] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:14:14.948 [2024-05-14 23:56:15.480622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539979 ] 00:14:14.948 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.207 [2024-05-14 23:56:15.549725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.207 [2024-05-14 23:56:15.624212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.775 23:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:15.775 23:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:15.775 23:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:16.034 Nvme0n1 00:14:16.034 23:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:16.294 [ 00:14:16.294 { 00:14:16.294 "name": "Nvme0n1", 00:14:16.294 "aliases": [ 00:14:16.294 "9dcf540a-1d56-4b73-b7ec-79f1e8c9be94" 00:14:16.294 ], 00:14:16.294 "product_name": "NVMe disk", 00:14:16.294 "block_size": 4096, 00:14:16.294 "num_blocks": 38912, 00:14:16.294 "uuid": "9dcf540a-1d56-4b73-b7ec-79f1e8c9be94", 00:14:16.294 "assigned_rate_limits": { 00:14:16.294 "rw_ios_per_sec": 0, 00:14:16.294 "rw_mbytes_per_sec": 0, 00:14:16.294 "r_mbytes_per_sec": 0, 00:14:16.294 "w_mbytes_per_sec": 0 00:14:16.294 }, 00:14:16.294 "claimed": false, 00:14:16.294 "zoned": false, 00:14:16.294 "supported_io_types": { 00:14:16.294 "read": true, 00:14:16.294 "write": true, 00:14:16.294 "unmap": true, 00:14:16.294 "write_zeroes": true, 00:14:16.294 "flush": true, 00:14:16.294 "reset": true, 00:14:16.294 "compare": true, 00:14:16.294 "compare_and_write": true, 00:14:16.294 "abort": true, 00:14:16.294 "nvme_admin": true, 00:14:16.294 "nvme_io": true 00:14:16.294 }, 00:14:16.294 "memory_domains": [ 00:14:16.294 { 00:14:16.294 "dma_device_id": "system", 00:14:16.294 "dma_device_type": 1 00:14:16.294 } 00:14:16.294 ], 00:14:16.294 "driver_specific": { 00:14:16.294 "nvme": [ 00:14:16.294 { 00:14:16.294 "trid": { 00:14:16.294 "trtype": "TCP", 00:14:16.294 "adrfam": "IPv4", 00:14:16.294 "traddr": "10.0.0.2", 00:14:16.294 "trsvcid": "4420", 00:14:16.294 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:16.294 }, 00:14:16.294 "ctrlr_data": { 00:14:16.294 "cntlid": 1, 00:14:16.294 "vendor_id": "0x8086", 00:14:16.294 "model_number": "SPDK bdev Controller", 00:14:16.294 "serial_number": "SPDK0", 00:14:16.294 "firmware_revision": "24.05", 00:14:16.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:16.294 "oacs": { 00:14:16.294 "security": 0, 00:14:16.294 "format": 0, 00:14:16.294 "firmware": 0, 00:14:16.294 "ns_manage": 0 00:14:16.294 }, 00:14:16.294 "multi_ctrlr": true, 00:14:16.294 "ana_reporting": false 00:14:16.294 }, 00:14:16.294 "vs": { 00:14:16.294 "nvme_version": "1.3" 00:14:16.294 }, 00:14:16.294 "ns_data": { 00:14:16.294 "id": 1, 00:14:16.294 "can_share": true 00:14:16.294 } 00:14:16.294 } 00:14:16.294 ], 00:14:16.294 "mp_policy": "active_passive" 00:14:16.294 } 00:14:16.294 } 00:14:16.294 ] 00:14:16.294 23:56:16 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3540249 00:14:16.294 23:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:16.294 23:56:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.294 Running I/O for 10 seconds... 00:14:17.673 Latency(us) 00:14:17.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.673 Nvme0n1 : 1.00 23869.00 93.24 0.00 0.00 0.00 0.00 0.00 00:14:17.673 =================================================================================================================== 00:14:17.673 Total : 23869.00 93.24 0.00 0.00 0.00 0.00 0.00 00:14:17.673 00:14:18.241 23:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:18.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.501 Nvme0n1 : 2.00 24094.50 94.12 0.00 0.00 0.00 0.00 0.00 00:14:18.501 =================================================================================================================== 00:14:18.501 Total : 24094.50 94.12 0.00 0.00 0.00 0.00 0.00 00:14:18.501 00:14:18.501 true 00:14:18.501 23:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:18.501 23:56:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:18.761 23:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:18.761 23:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:18.761 23:56:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3540249 00:14:19.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.329 Nvme0n1 : 3.00 24148.33 94.33 0.00 0.00 0.00 0.00 0.00 00:14:19.329 =================================================================================================================== 00:14:19.329 Total : 24148.33 94.33 0.00 0.00 0.00 0.00 0.00 00:14:19.329 00:14:20.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.266 Nvme0n1 : 4.00 24111.00 94.18 0.00 0.00 0.00 0.00 0.00 00:14:20.266 =================================================================================================================== 00:14:20.266 Total : 24111.00 94.18 0.00 0.00 0.00 0.00 0.00 00:14:20.266 00:14:21.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.644 Nvme0n1 : 5.00 24191.40 94.50 0.00 0.00 0.00 0.00 0.00 00:14:21.644 =================================================================================================================== 00:14:21.644 Total : 24191.40 94.50 0.00 0.00 0.00 0.00 0.00 00:14:21.644 00:14:22.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.290 Nvme0n1 : 6.00 24234.17 94.66 0.00 0.00 0.00 0.00 0.00 00:14:22.290 
=================================================================================================================== 00:14:22.290 Total : 24234.17 94.66 0.00 0.00 0.00 0.00 0.00 00:14:22.290 00:14:23.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.683 Nvme0n1 : 7.00 24278.57 94.84 0.00 0.00 0.00 0.00 0.00 00:14:23.683 =================================================================================================================== 00:14:23.683 Total : 24278.57 94.84 0.00 0.00 0.00 0.00 0.00 00:14:23.683 00:14:24.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.618 Nvme0n1 : 8.00 24317.88 94.99 0.00 0.00 0.00 0.00 0.00 00:14:24.618 =================================================================================================================== 00:14:24.618 Total : 24317.88 94.99 0.00 0.00 0.00 0.00 0.00 00:14:24.618 00:14:25.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.553 Nvme0n1 : 9.00 24340.89 95.08 0.00 0.00 0.00 0.00 0.00 00:14:25.554 =================================================================================================================== 00:14:25.554 Total : 24340.89 95.08 0.00 0.00 0.00 0.00 0.00 00:14:25.554 00:14:26.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.490 Nvme0n1 : 10.00 24353.30 95.13 0.00 0.00 0.00 0.00 0.00 00:14:26.490 =================================================================================================================== 00:14:26.490 Total : 24353.30 95.13 0.00 0.00 0.00 0.00 0.00 00:14:26.490 00:14:26.490 00:14:26.490 Latency(us) 00:14:26.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.490 Nvme0n1 : 10.01 24356.70 95.14 0.00 0.00 5251.63 1861.22 16148.07 00:14:26.490 =================================================================================================================== 00:14:26.490 Total : 24356.70 95.14 0.00 0.00 5251.63 1861.22 16148.07 00:14:26.490 0 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3539979 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3539979 ']' 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3539979 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3539979 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3539979' 00:14:26.490 killing process with pid 3539979 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3539979 00:14:26.490 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.490 00:14:26.490 Latency(us) 00:14:26.490 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:26.490 =================================================================================================================== 00:14:26.490 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.490 23:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3539979 00:14:26.748 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:26.748 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:27.007 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:27.007 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3536678 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3536678 00:14:27.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3536678 Killed "${NVMF_APP[@]}" "$@" 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3542114 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3542114 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3542114 ']' 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
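This is where the dirty variant departs from the clean one: the first nvmf_tgt was killed with SIGKILL so the lvstore never got a clean shutdown, and a fresh target is now being started so that re-creating the AIO bdev forces blobstore recovery (the "Performing recovery on blobstore" notices below). In outline (a sketch, with $SPDK and the PID as placeholders):

  # hard-kill the running target so the lvstore is never cleanly unloaded
  kill -9 $nvmfpid

  # start a new target in the same namespace and re-create the AIO bdev;
  # examining it replays the blobstore metadata and brings the lvol back
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096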
00:14:27.265 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:27.266 23:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:27.266 [2024-05-14 23:56:27.795001] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:14:27.266 [2024-05-14 23:56:27.795046] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.266 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.524 [2024-05-14 23:56:27.867925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.524 [2024-05-14 23:56:27.944327] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.524 [2024-05-14 23:56:27.944362] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.524 [2024-05-14 23:56:27.944371] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.524 [2024-05-14 23:56:27.944380] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.524 [2024-05-14 23:56:27.944387] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.524 [2024-05-14 23:56:27.944417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.092 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:28.092 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:28.092 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.092 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:28.092 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:28.092 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.092 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:28.351 [2024-05-14 23:56:28.793970] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:28.351 [2024-05-14 23:56:28.794062] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:28.351 [2024-05-14 23:56:28.794092] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:28.351 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:28.351 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9dcf540a-1d56-4b73-b7ec-79f1e8c9be94 00:14:28.351 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=9dcf540a-1d56-4b73-b7ec-79f1e8c9be94 00:14:28.351 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:28.351 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:28.351 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:28.351 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:28.351 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:28.609 23:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9dcf540a-1d56-4b73-b7ec-79f1e8c9be94 -t 2000 00:14:28.609 [ 00:14:28.609 { 00:14:28.609 "name": "9dcf540a-1d56-4b73-b7ec-79f1e8c9be94", 00:14:28.609 "aliases": [ 00:14:28.609 "lvs/lvol" 00:14:28.609 ], 00:14:28.609 "product_name": "Logical Volume", 00:14:28.609 "block_size": 4096, 00:14:28.609 "num_blocks": 38912, 00:14:28.609 "uuid": "9dcf540a-1d56-4b73-b7ec-79f1e8c9be94", 00:14:28.609 "assigned_rate_limits": { 00:14:28.609 "rw_ios_per_sec": 0, 00:14:28.609 "rw_mbytes_per_sec": 0, 00:14:28.609 "r_mbytes_per_sec": 0, 00:14:28.609 "w_mbytes_per_sec": 0 00:14:28.609 }, 00:14:28.609 "claimed": false, 00:14:28.609 "zoned": false, 00:14:28.609 "supported_io_types": { 00:14:28.609 "read": true, 00:14:28.609 "write": true, 00:14:28.609 "unmap": true, 00:14:28.609 "write_zeroes": true, 00:14:28.609 "flush": false, 00:14:28.609 "reset": true, 00:14:28.609 "compare": false, 00:14:28.609 "compare_and_write": false, 00:14:28.609 "abort": false, 00:14:28.609 "nvme_admin": false, 00:14:28.610 "nvme_io": false 00:14:28.610 }, 00:14:28.610 "driver_specific": { 00:14:28.610 "lvol": { 00:14:28.610 "lvol_store_uuid": "428f803d-72e5-438c-9599-c1c4551f2b1f", 00:14:28.610 "base_bdev": "aio_bdev", 00:14:28.610 "thin_provision": false, 00:14:28.610 "num_allocated_clusters": 38, 00:14:28.610 "snapshot": false, 00:14:28.610 "clone": false, 00:14:28.610 "esnap_clone": false 00:14:28.610 } 00:14:28.610 } 00:14:28.610 } 00:14:28.610 ] 00:14:28.610 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:28.610 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:28.610 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:28.868 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:28.868 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:28.868 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:29.126 [2024-05-14 23:56:29.642129] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.126 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.127 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.127 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:29.127 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:29.385 request: 00:14:29.385 { 00:14:29.385 "uuid": "428f803d-72e5-438c-9599-c1c4551f2b1f", 00:14:29.385 "method": "bdev_lvol_get_lvstores", 00:14:29.385 "req_id": 1 00:14:29.385 } 00:14:29.385 Got JSON-RPC error response 00:14:29.385 response: 00:14:29.385 { 00:14:29.385 "code": -19, 00:14:29.385 "message": "No such device" 00:14:29.385 } 00:14:29.385 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:29.385 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:29.385 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:29.385 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:29.385 23:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:29.643 aio_bdev 00:14:29.643 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9dcf540a-1d56-4b73-b7ec-79f1e8c9be94 00:14:29.643 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=9dcf540a-1d56-4b73-b7ec-79f1e8c9be94 00:14:29.643 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:29.643 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:29.643 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
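At this point the test has confirmed that, with aio_bdev deleted, bdev_lvol_get_lvstores fails with the -19 "No such device" JSON-RPC error shown above, and it has recreated the AIO bdev so the dirty lvstore can be recovered. The waitforbdev calls traced above and below follow a simple two-step pattern; a condensed sketch, assuming only the rpc.py path used throughout this run, looks like this:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}            # default 2000 ms, matching the trace
    # Let all registered bdevs finish their examine callbacks first ...
    "$rpc_py" bdev_wait_for_examine
    # ... then query the bdev, letting the RPC wait up to bdev_timeout ms
    # for it to (re)appear after the blobstore recovery.
    "$rpc_py" bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}

waitforbdev 9dcf540a-1d56-4b73-b7ec-79f1e8c9be94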
00:14:29.644 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:29.644 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:29.644 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9dcf540a-1d56-4b73-b7ec-79f1e8c9be94 -t 2000 00:14:29.902 [ 00:14:29.902 { 00:14:29.902 "name": "9dcf540a-1d56-4b73-b7ec-79f1e8c9be94", 00:14:29.902 "aliases": [ 00:14:29.902 "lvs/lvol" 00:14:29.902 ], 00:14:29.902 "product_name": "Logical Volume", 00:14:29.902 "block_size": 4096, 00:14:29.902 "num_blocks": 38912, 00:14:29.902 "uuid": "9dcf540a-1d56-4b73-b7ec-79f1e8c9be94", 00:14:29.902 "assigned_rate_limits": { 00:14:29.902 "rw_ios_per_sec": 0, 00:14:29.902 "rw_mbytes_per_sec": 0, 00:14:29.902 "r_mbytes_per_sec": 0, 00:14:29.902 "w_mbytes_per_sec": 0 00:14:29.902 }, 00:14:29.902 "claimed": false, 00:14:29.902 "zoned": false, 00:14:29.902 "supported_io_types": { 00:14:29.902 "read": true, 00:14:29.902 "write": true, 00:14:29.902 "unmap": true, 00:14:29.902 "write_zeroes": true, 00:14:29.902 "flush": false, 00:14:29.902 "reset": true, 00:14:29.902 "compare": false, 00:14:29.902 "compare_and_write": false, 00:14:29.902 "abort": false, 00:14:29.902 "nvme_admin": false, 00:14:29.902 "nvme_io": false 00:14:29.902 }, 00:14:29.902 "driver_specific": { 00:14:29.902 "lvol": { 00:14:29.902 "lvol_store_uuid": "428f803d-72e5-438c-9599-c1c4551f2b1f", 00:14:29.902 "base_bdev": "aio_bdev", 00:14:29.902 "thin_provision": false, 00:14:29.902 "num_allocated_clusters": 38, 00:14:29.902 "snapshot": false, 00:14:29.902 "clone": false, 00:14:29.902 "esnap_clone": false 00:14:29.902 } 00:14:29.902 } 00:14:29.902 } 00:14:29.902 ] 00:14:29.902 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:29.902 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:29.902 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:30.160 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:30.160 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:30.160 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:30.160 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:30.160 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9dcf540a-1d56-4b73-b7ec-79f1e8c9be94 00:14:30.419 23:56:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 428f803d-72e5-438c-9599-c1c4551f2b1f 00:14:30.678 23:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:30.679 23:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:30.939 00:14:30.939 real 0m17.568s 00:14:30.939 user 0m43.909s 00:14:30.939 sys 0m4.738s 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:30.939 ************************************ 00:14:30.939 END TEST lvs_grow_dirty 00:14:30.939 ************************************ 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:30.939 nvmf_trace.0 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.939 rmmod nvme_tcp 00:14:30.939 rmmod nvme_fabrics 00:14:30.939 rmmod nvme_keyring 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3542114 ']' 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3542114 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3542114 ']' 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3542114 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3542114 00:14:30.939 23:56:31 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3542114' 00:14:30.939 killing process with pid 3542114 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3542114 00:14:30.939 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3542114 00:14:31.197 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.197 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.197 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.197 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.197 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.197 23:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.197 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.197 23:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.788 23:56:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.788 00:14:33.788 real 0m44.008s 00:14:33.788 user 1m4.841s 00:14:33.788 sys 0m12.571s 00:14:33.788 23:56:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:33.788 23:56:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:33.788 ************************************ 00:14:33.788 END TEST nvmf_lvs_grow 00:14:33.788 ************************************ 00:14:33.788 23:56:33 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:33.788 23:56:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:33.788 23:56:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:33.788 23:56:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:33.788 ************************************ 00:14:33.788 START TEST nvmf_bdev_io_wait 00:14:33.788 ************************************ 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:33.788 * Looking for test storage... 
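Each suite in this log is launched through the run_test wrapper in common/autotest_common.sh, which prints the START/END banners, times the script (the real/user/sys lines above), and runs the given script with its arguments (here --transport=tcp). A simplified sketch of that wrapper, with details omitted (the argument-count guard corresponds to the '[' 3 -le 1 ']' check in the trace):

run_test() {
    [ "$#" -le 1 ] && return 1               # need a test name plus a command
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                                # produces the real/user/sys summary
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

run_test nvmf_bdev_io_wait ./test/nvmf/target/bdev_io_wait.sh --transport=tcp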
00:14:33.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.788 23:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.788 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.789 23:56:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:40.354 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:40.354 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:40.354 Found net devices under 0000:af:00.0: cvl_0_0 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:40.354 Found net devices under 0000:af:00.1: cvl_0_1 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:40.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:14:40.354 00:14:40.354 --- 10.0.0.2 ping statistics --- 00:14:40.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.354 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:40.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:14:40.354 00:14:40.354 --- 10.0.0.1 ping statistics --- 00:14:40.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.354 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3546419 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3546419 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3546419 ']' 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:40.354 23:56:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.354 [2024-05-14 23:56:40.859515] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:14:40.354 [2024-05-14 23:56:40.859559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.354 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.354 [2024-05-14 23:56:40.931708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.615 [2024-05-14 23:56:41.007008] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.615 [2024-05-14 23:56:41.007048] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.615 [2024-05-14 23:56:41.007058] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.615 [2024-05-14 23:56:41.007067] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.615 [2024-05-14 23:56:41.007074] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.615 [2024-05-14 23:56:41.007143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.615 [2024-05-14 23:56:41.007243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.615 [2024-05-14 23:56:41.007267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.615 [2024-05-14 23:56:41.007268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.182 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.182 [2024-05-14 23:56:41.769666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.441 23:56:41 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.441 Malloc0 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.441 [2024-05-14 23:56:41.839199] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:41.441 [2024-05-14 23:56:41.839480] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3546703 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3546705 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:41.441 { 00:14:41.441 "params": { 00:14:41.441 "name": "Nvme$subsystem", 00:14:41.441 "trtype": "$TEST_TRANSPORT", 00:14:41.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.441 "adrfam": "ipv4", 00:14:41.441 "trsvcid": "$NVMF_PORT", 00:14:41.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.441 "hdgst": ${hdgst:-false}, 00:14:41.441 "ddgst": ${ddgst:-false} 00:14:41.441 }, 00:14:41.441 "method": 
"bdev_nvme_attach_controller" 00:14:41.441 } 00:14:41.441 EOF 00:14:41.441 )") 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3546707 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:41.441 { 00:14:41.441 "params": { 00:14:41.441 "name": "Nvme$subsystem", 00:14:41.441 "trtype": "$TEST_TRANSPORT", 00:14:41.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.441 "adrfam": "ipv4", 00:14:41.441 "trsvcid": "$NVMF_PORT", 00:14:41.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.441 "hdgst": ${hdgst:-false}, 00:14:41.441 "ddgst": ${ddgst:-false} 00:14:41.441 }, 00:14:41.441 "method": "bdev_nvme_attach_controller" 00:14:41.441 } 00:14:41.441 EOF 00:14:41.441 )") 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3546710 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:41.441 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:41.441 { 00:14:41.441 "params": { 00:14:41.441 "name": "Nvme$subsystem", 00:14:41.441 "trtype": "$TEST_TRANSPORT", 00:14:41.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.441 "adrfam": "ipv4", 00:14:41.441 "trsvcid": "$NVMF_PORT", 00:14:41.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.441 "hdgst": ${hdgst:-false}, 00:14:41.441 "ddgst": ${ddgst:-false} 00:14:41.441 }, 00:14:41.441 "method": "bdev_nvme_attach_controller" 00:14:41.441 } 00:14:41.442 EOF 00:14:41.442 )") 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:41.442 { 00:14:41.442 "params": { 00:14:41.442 "name": "Nvme$subsystem", 00:14:41.442 "trtype": "$TEST_TRANSPORT", 00:14:41.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.442 "adrfam": "ipv4", 00:14:41.442 "trsvcid": "$NVMF_PORT", 00:14:41.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.442 "hdgst": ${hdgst:-false}, 00:14:41.442 "ddgst": ${ddgst:-false} 00:14:41.442 }, 00:14:41.442 "method": "bdev_nvme_attach_controller" 00:14:41.442 } 00:14:41.442 EOF 00:14:41.442 )") 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3546703 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:41.442 "params": { 00:14:41.442 "name": "Nvme1", 00:14:41.442 "trtype": "tcp", 00:14:41.442 "traddr": "10.0.0.2", 00:14:41.442 "adrfam": "ipv4", 00:14:41.442 "trsvcid": "4420", 00:14:41.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.442 "hdgst": false, 00:14:41.442 "ddgst": false 00:14:41.442 }, 00:14:41.442 "method": "bdev_nvme_attach_controller" 00:14:41.442 }' 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
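gen_nvmf_target_json expands one bdev_nvme_attach_controller stanza per requested subsystem and pipes the result through jq; the fully substituted stanzas are printed just below, with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT resolved to tcp, 10.0.0.2 and 4420. A rough standalone sketch of that stanza (gen_attach_stanza is a hypothetical name; the real helper additionally wraps the stanzas into the full JSON config each bdevperf instance reads from /dev/fd/63, which is not reproduced here):

# Hypothetical helper reproducing only the per-controller stanza seen below.
gen_attach_stanza() {
    local subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_attach_stanza 1 | jq .    # matches the Nvme1 stanza printed below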
00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:41.442 "params": { 00:14:41.442 "name": "Nvme1", 00:14:41.442 "trtype": "tcp", 00:14:41.442 "traddr": "10.0.0.2", 00:14:41.442 "adrfam": "ipv4", 00:14:41.442 "trsvcid": "4420", 00:14:41.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.442 "hdgst": false, 00:14:41.442 "ddgst": false 00:14:41.442 }, 00:14:41.442 "method": "bdev_nvme_attach_controller" 00:14:41.442 }' 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:41.442 "params": { 00:14:41.442 "name": "Nvme1", 00:14:41.442 "trtype": "tcp", 00:14:41.442 "traddr": "10.0.0.2", 00:14:41.442 "adrfam": "ipv4", 00:14:41.442 "trsvcid": "4420", 00:14:41.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.442 "hdgst": false, 00:14:41.442 "ddgst": false 00:14:41.442 }, 00:14:41.442 "method": "bdev_nvme_attach_controller" 00:14:41.442 }' 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:41.442 23:56:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:41.442 "params": { 00:14:41.442 "name": "Nvme1", 00:14:41.442 "trtype": "tcp", 00:14:41.442 "traddr": "10.0.0.2", 00:14:41.442 "adrfam": "ipv4", 00:14:41.442 "trsvcid": "4420", 00:14:41.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.442 "hdgst": false, 00:14:41.442 "ddgst": false 00:14:41.442 }, 00:14:41.442 "method": "bdev_nvme_attach_controller" 00:14:41.442 }' 00:14:41.442 [2024-05-14 23:56:41.891719] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:14:41.442 [2024-05-14 23:56:41.891776] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:41.442 [2024-05-14 23:56:41.892600] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:14:41.442 [2024-05-14 23:56:41.892649] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:41.442 [2024-05-14 23:56:41.892776] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:14:41.442 [2024-05-14 23:56:41.892787] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:14:41.442 [2024-05-14 23:56:41.892820] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:41.442 [2024-05-14 23:56:41.892829] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:41.442 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.442 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.699 [2024-05-14 23:56:42.078311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.699 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.699 [2024-05-14 23:56:42.150920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:41.699 [2024-05-14 23:56:42.169656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.699 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.699 [2024-05-14 23:56:42.244999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:41.699 [2024-05-14 23:56:42.270550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.957 [2024-05-14 23:56:42.314664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.957 [2024-05-14 23:56:42.356573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:41.957 [2024-05-14 23:56:42.388039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:41.957 Running I/O for 1 seconds... 00:14:41.957 Running I/O for 1 seconds... 00:14:42.215 Running I/O for 1 seconds... 00:14:42.215 Running I/O for 1 seconds... 
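Four bdevperf instances now run in parallel against the same NVMe/TCP controller, one per workload, each pinned to its own core by the -m mask and issuing queue-depth-128, 4 KiB I/O for one second; their per-job throughput and latency tables follow. A condensed sketch of that launch pattern (the real script records each PID as WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID and waits on them individually; gen_nvmf_target_json is the helper traced above, fed here via process substitution instead of /dev/fd/63):

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# workload, core mask and instance id for the four jobs in this log
for spec in "write 0x10 1" "read 0x20 2" "flush 0x40 3" "unmap 0x80 4"; do
    set -- $spec
    "$bdevperf" -m "$2" -i "$3" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$1" -t 1 -s 256 &
done
wait    # all four one-second runs complete before the results are collected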
00:14:43.151 00:14:43.151 Latency(us) 00:14:43.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.151 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:43.151 Nvme1n1 : 1.00 14168.61 55.35 0.00 0.00 9008.07 4823.45 25060.97 00:14:43.151 =================================================================================================================== 00:14:43.151 Total : 14168.61 55.35 0.00 0.00 9008.07 4823.45 25060.97 00:14:43.151 00:14:43.151 Latency(us) 00:14:43.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.151 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:43.151 Nvme1n1 : 1.01 6993.61 27.32 0.00 0.00 18190.53 7707.03 24536.68 00:14:43.151 =================================================================================================================== 00:14:43.151 Total : 6993.61 27.32 0.00 0.00 18190.53 7707.03 24536.68 00:14:43.151 00:14:43.151 Latency(us) 00:14:43.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.151 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:43.151 Nvme1n1 : 1.00 257774.27 1006.93 0.00 0.00 494.45 205.62 655.36 00:14:43.151 =================================================================================================================== 00:14:43.151 Total : 257774.27 1006.93 0.00 0.00 494.45 205.62 655.36 00:14:43.151 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3546705 00:14:43.151 00:14:43.151 Latency(us) 00:14:43.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.151 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:43.151 Nvme1n1 : 1.01 7579.15 29.61 0.00 0.00 16830.27 6579.81 41313.89 00:14:43.151 =================================================================================================================== 00:14:43.151 Total : 7579.15 29.61 0.00 0.00 16830.27 6579.81 41313.89 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3546707 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3546710 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.409 23:56:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.409 rmmod nvme_tcp 00:14:43.409 rmmod nvme_fabrics 00:14:43.409 rmmod nvme_keyring 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3546419 ']' 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3546419 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3546419 ']' 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3546419 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3546419 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3546419' 00:14:43.667 killing process with pid 3546419 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3546419 00:14:43.667 [2024-05-14 23:56:44.072176] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:43.667 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3546419 00:14:43.926 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.926 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:43.926 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:43.926 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.926 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.926 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.926 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.926 23:56:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.827 23:56:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:45.827 00:14:45.827 real 0m12.463s 00:14:45.827 user 0m20.120s 00:14:45.827 sys 0m7.139s 00:14:45.827 23:56:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.827 23:56:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:45.827 ************************************ 00:14:45.827 END TEST nvmf_bdev_io_wait 00:14:45.827 ************************************ 00:14:45.827 23:56:46 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:45.827 23:56:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:45.827 23:56:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.827 23:56:46 nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:14:46.086 ************************************ 00:14:46.086 START TEST nvmf_queue_depth 00:14:46.086 ************************************ 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:46.086 * Looking for test storage... 00:14:46.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.086 23:56:46 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:46.087 23:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:52.652 
23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:52.652 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:52.652 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:52.652 Found net devices under 0000:af:00.0: cvl_0_0 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:52.652 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:52.653 Found net devices under 0000:af:00.1: cvl_0_1 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:52.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:52.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:14:52.653 00:14:52.653 --- 10.0.0.2 ping statistics --- 00:14:52.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.653 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:52.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:14:52.653 00:14:52.653 --- 10.0.0.1 ping statistics --- 00:14:52.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.653 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3550691 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3550691 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3550691 ']' 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:52.653 23:56:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:52.653 [2024-05-14 23:56:53.043623] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
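In short, the trace above builds the point-to-point TCP test topology before the target is started. A condensed sketch of the commands already shown (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken from the log; cvl_0_0 becomes the target-side port inside the cvl_0_0_ns_spdk namespace, while cvl_0_1 stays in the root namespace as the initiator side):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check

Both pings return with 0% loss, so nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and the queue_depth test proceeds against 10.0.0.2:4420.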
00:14:52.653 [2024-05-14 23:56:53.043670] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.653 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.653 [2024-05-14 23:56:53.117146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.653 [2024-05-14 23:56:53.189453] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.653 [2024-05-14 23:56:53.189489] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.653 [2024-05-14 23:56:53.189498] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.653 [2024-05-14 23:56:53.189507] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.653 [2024-05-14 23:56:53.189514] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.653 [2024-05-14 23:56:53.189539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:53.589 [2024-05-14 23:56:53.884338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:53.589 Malloc0 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.589 23:56:53 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:53.589 [2024-05-14 23:56:53.944350] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:53.589 [2024-05-14 23:56:53.944575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.589 23:56:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3550968 00:14:53.590 23:56:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:53.590 23:56:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.590 23:56:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3550968 /var/tmp/bdevperf.sock 00:14:53.590 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3550968 ']' 00:14:53.590 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.590 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:53.590 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.590 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:53.590 23:56:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:53.590 [2024-05-14 23:56:53.996781] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
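Condensed, the target and workload configuration the trace above has just applied looks like this (every flag is copied from the rpc_cmd and bdevperf invocations in the log; presenting them as direct scripts/rpc.py calls is an assumption, since rpc_cmd is a test-suite wrapper whose definition is not part of this excerpt):

  # against the nvmf_tgt running in the namespace (RPC socket /var/tmp/spdk.sock)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevperf runs on the initiator side: queue depth 1024, 4 KiB verify I/O, 10 seconds
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10

The next steps in the trace attach the remote controller through the bdevperf RPC socket (bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1) and start the run with bdevperf.py perform_tests.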
00:14:53.590 [2024-05-14 23:56:53.996826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550968 ] 00:14:53.590 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.590 [2024-05-14 23:56:54.065923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.590 [2024-05-14 23:56:54.134520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.526 23:56:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.526 23:56:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:14:54.526 23:56:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:54.526 23:56:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.526 23:56:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:54.526 NVMe0n1 00:14:54.526 23:56:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.526 23:56:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:54.526 Running I/O for 10 seconds... 00:15:04.500 00:15:04.500 Latency(us) 00:15:04.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.500 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:04.500 Verification LBA range: start 0x0 length 0x4000 00:15:04.500 NVMe0n1 : 10.05 12971.31 50.67 0.00 0.00 78670.21 9489.61 55364.81 00:15:04.500 =================================================================================================================== 00:15:04.500 Total : 12971.31 50.67 0.00 0.00 78670.21 9489.61 55364.81 00:15:04.500 0 00:15:04.500 23:57:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3550968 00:15:04.500 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3550968 ']' 00:15:04.500 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3550968 00:15:04.500 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:04.500 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:04.500 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3550968 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3550968' 00:15:04.759 killing process with pid 3550968 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3550968 00:15:04.759 Received shutdown signal, test time was about 10.000000 seconds 00:15:04.759 00:15:04.759 Latency(us) 00:15:04.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.759 =================================================================================================================== 00:15:04.759 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3550968 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:04.759 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:04.759 rmmod nvme_tcp 00:15:05.029 rmmod nvme_fabrics 00:15:05.029 rmmod nvme_keyring 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3550691 ']' 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3550691 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3550691 ']' 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3550691 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3550691 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3550691' 00:15:05.029 killing process with pid 3550691 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3550691 00:15:05.029 [2024-05-14 23:57:05.463358] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:05.029 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3550691 00:15:05.358 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.358 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.358 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.358 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.358 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.358 23:57:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.358 23:57:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.358 23:57:05 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.268 23:57:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:07.268 00:15:07.268 real 0m21.327s 00:15:07.268 user 0m24.621s 00:15:07.268 sys 0m6.861s 00:15:07.268 23:57:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:07.268 23:57:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:07.268 ************************************ 00:15:07.268 END TEST nvmf_queue_depth 00:15:07.268 ************************************ 00:15:07.268 23:57:07 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:07.268 23:57:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:07.268 23:57:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:07.268 23:57:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:07.528 ************************************ 00:15:07.528 START TEST nvmf_target_multipath 00:15:07.528 ************************************ 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:07.528 * Looking for test storage... 00:15:07.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.528 23:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:07.528 23:57:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.529 23:57:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:14.098 23:57:14 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:14.098 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:14.098 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.098 23:57:14 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:14.098 Found net devices under 0000:af:00.0: cvl_0_0 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:14.098 Found net devices under 0000:af:00.1: cvl_0_1 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:14.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:15:14.098 00:15:14.098 --- 10.0.0.2 ping statistics --- 00:15:14.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.098 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:14.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:15:14.098 00:15:14.098 --- 10.0.0.1 ping statistics --- 00:15:14.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.098 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:14.098 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:14.357 23:57:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:14.357 23:57:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:14.357 only one NIC for nvmf test 00:15:14.357 23:57:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:14.357 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:14.357 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:14.357 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:14.357 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:14.357 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:14.358 rmmod nvme_tcp 00:15:14.358 rmmod nvme_fabrics 00:15:14.358 rmmod nvme_keyring 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.358 23:57:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.263 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:16.263 23:57:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:16.263 23:57:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:16.263 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.263 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:16.263 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.263 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:16.263 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.263 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.521 00:15:16.521 real 0m9.023s 00:15:16.521 user 0m1.851s 00:15:16.521 sys 0m5.206s 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:16.521 23:57:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:16.521 ************************************ 00:15:16.521 END TEST nvmf_target_multipath 00:15:16.521 ************************************ 00:15:16.521 23:57:16 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:16.521 23:57:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:16.521 23:57:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:16.521 23:57:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.521 ************************************ 00:15:16.521 START TEST nvmf_zcopy 00:15:16.521 ************************************ 00:15:16.521 23:57:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:16.521 * Looking for test storage... 
00:15:16.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.521 23:57:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.522 23:57:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:23.088 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.088 
23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:23.088 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.088 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:23.089 Found net devices under 0000:af:00.0: cvl_0_0 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:23.089 Found net devices under 0000:af:00.1: cvl_0_1 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:23.089 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:23.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:23.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:15:23.348 00:15:23.348 --- 10.0.0.2 ping statistics --- 00:15:23.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.348 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:23.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:23.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:15:23.348 00:15:23.348 --- 10.0.0.1 ping statistics --- 00:15:23.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.348 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3560767 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3560767 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3560767 ']' 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:23.348 23:57:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:23.348 [2024-05-14 23:57:23.834156] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:15:23.348 [2024-05-14 23:57:23.834219] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.348 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.348 [2024-05-14 23:57:23.907805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.607 [2024-05-14 23:57:23.974566] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.607 [2024-05-14 23:57:23.974606] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:23.607 [2024-05-14 23:57:23.974615] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.607 [2024-05-14 23:57:23.974623] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.607 [2024-05-14 23:57:23.974630] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.607 [2024-05-14 23:57:23.974658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:24.175 [2024-05-14 23:57:24.669026] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:24.175 [2024-05-14 23:57:24.693033] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:24.175 [2024-05-14 23:57:24.693258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:24.175 malloc0 00:15:24.175 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:24.176 { 00:15:24.176 "params": { 00:15:24.176 "name": "Nvme$subsystem", 00:15:24.176 "trtype": "$TEST_TRANSPORT", 00:15:24.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:24.176 "adrfam": "ipv4", 00:15:24.176 "trsvcid": "$NVMF_PORT", 00:15:24.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:24.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:24.176 "hdgst": ${hdgst:-false}, 00:15:24.176 "ddgst": ${ddgst:-false} 00:15:24.176 }, 00:15:24.176 "method": "bdev_nvme_attach_controller" 00:15:24.176 } 00:15:24.176 EOF 00:15:24.176 )") 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:24.176 23:57:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:24.176 "params": { 00:15:24.176 "name": "Nvme1", 00:15:24.176 "trtype": "tcp", 00:15:24.176 "traddr": "10.0.0.2", 00:15:24.176 "adrfam": "ipv4", 00:15:24.176 "trsvcid": "4420", 00:15:24.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.176 "hdgst": false, 00:15:24.176 "ddgst": false 00:15:24.176 }, 00:15:24.176 "method": "bdev_nvme_attach_controller" 00:15:24.176 }' 00:15:24.435 [2024-05-14 23:57:24.772138] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:15:24.435 [2024-05-14 23:57:24.772184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560861 ] 00:15:24.435 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.435 [2024-05-14 23:57:24.841285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.435 [2024-05-14 23:57:24.910107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.694 Running I/O for 10 seconds... 
00:15:34.674 00:15:34.674 Latency(us) 00:15:34.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.674 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:34.674 Verification LBA range: start 0x0 length 0x1000 00:15:34.674 Nvme1n1 : 10.01 8785.65 68.64 0.00 0.00 14529.10 1835.01 45088.77 00:15:34.674 =================================================================================================================== 00:15:34.674 Total : 8785.65 68.64 0.00 0.00 14529.10 1835.01 45088.77 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3562661 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:34.934 { 00:15:34.934 "params": { 00:15:34.934 "name": "Nvme$subsystem", 00:15:34.934 "trtype": "$TEST_TRANSPORT", 00:15:34.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.934 "adrfam": "ipv4", 00:15:34.934 "trsvcid": "$NVMF_PORT", 00:15:34.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.934 "hdgst": ${hdgst:-false}, 00:15:34.934 "ddgst": ${ddgst:-false} 00:15:34.934 }, 00:15:34.934 "method": "bdev_nvme_attach_controller" 00:15:34.934 } 00:15:34.934 EOF 00:15:34.934 )") 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:34.934 [2024-05-14 23:57:35.466575] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.934 [2024-05-14 23:57:35.466611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:34.934 23:57:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:34.934 "params": { 00:15:34.934 "name": "Nvme1", 00:15:34.934 "trtype": "tcp", 00:15:34.934 "traddr": "10.0.0.2", 00:15:34.934 "adrfam": "ipv4", 00:15:34.934 "trsvcid": "4420", 00:15:34.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.934 "hdgst": false, 00:15:34.934 "ddgst": false 00:15:34.934 }, 00:15:34.934 "method": "bdev_nvme_attach_controller" 00:15:34.934 }' 00:15:34.934 [2024-05-14 23:57:35.478565] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.934 [2024-05-14 23:57:35.478579] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.934 [2024-05-14 23:57:35.490594] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.934 [2024-05-14 23:57:35.490606] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.934 [2024-05-14 23:57:35.502624] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.934 [2024-05-14 23:57:35.502636] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.934 [2024-05-14 23:57:35.507503] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:15:34.934 [2024-05-14 23:57:35.507555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3562661 ] 00:15:34.934 [2024-05-14 23:57:35.514657] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.934 [2024-05-14 23:57:35.514670] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.193 [2024-05-14 23:57:35.526690] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.526702] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.194 [2024-05-14 23:57:35.538720] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.538732] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.550752] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.550768] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.562783] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.562795] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.574816] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.574828] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.577490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.194 [2024-05-14 23:57:35.586851] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.586865] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.598879] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.598891] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.610912] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.610929] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.622946] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.622966] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.634978] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.634991] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.646392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.194 [2024-05-14 23:57:35.647022] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.647041] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.659050] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.659067] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.671080] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.671099] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.683112] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.683127] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.695140] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.695154] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.707171] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.707187] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.719206] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.719219] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.731254] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.731275] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.743272] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.743288] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.755306] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.755323] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 
[2024-05-14 23:57:35.767343] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.767363] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.194 [2024-05-14 23:57:35.779383] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.194 [2024-05-14 23:57:35.779398] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.791422] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.791442] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 Running I/O for 5 seconds... 00:15:35.453 [2024-05-14 23:57:35.803460] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.803472] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.827912] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.827934] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.843658] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.843680] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.857586] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.857608] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.872065] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.872086] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.887988] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.888008] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.902089] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.902110] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.915603] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.915625] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.929220] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.929241] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.942977] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.942997] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.956552] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.956572] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.970714] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:35.453 [2024-05-14 23:57:35.970734] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.982079] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.982099] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:35.996408] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:35.996427] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:36.009603] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:36.009623] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:36.023108] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:36.023128] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.453 [2024-05-14 23:57:36.036936] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.453 [2024-05-14 23:57:36.036956] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.050402] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.050422] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.063846] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.063866] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.077545] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.077566] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.091136] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.091156] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.104843] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.104864] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.118903] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.118924] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.130855] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.130875] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.144260] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.144281] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.157368] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.157389] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 
23:57:36.171651] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.171671] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.190388] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.190407] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.206112] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.206131] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.219841] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.219861] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.233516] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.233535] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.249558] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.249579] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.263375] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.263395] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.276977] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.276997] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.288835] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.288854] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.713 [2024-05-14 23:57:36.303036] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.713 [2024-05-14 23:57:36.303057] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.971 [2024-05-14 23:57:36.316614] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.971 [2024-05-14 23:57:36.316634] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.971 [2024-05-14 23:57:36.333716] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.971 [2024-05-14 23:57:36.333736] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.971 [2024-05-14 23:57:36.345739] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.971 [2024-05-14 23:57:36.345760] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.971 [2024-05-14 23:57:36.359917] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.971 [2024-05-14 23:57:36.359937] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.971 [2024-05-14 23:57:36.373343] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.972 [2024-05-14 23:57:36.373362] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:35.972 [2024-05-14 23:57:36.387658] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:35.972 [2024-05-14 23:57:36.387678] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:35.972 [2024-05-14 23:57:36.402862] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:35.972 [2024-05-14 23:57:36.402882] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats continuously, only the timestamps changing, from 23:57:36.416520 through 23:57:40.573353 (Jenkins timestamps 00:15:35.972 through 00:15:40.121) ...]
00:15:40.121 [2024-05-14 23:57:40.587103] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:40.121 [2024-05-14 23:57:40.587123] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:40.121 [2024-05-14 23:57:40.600525]
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.121 [2024-05-14 23:57:40.600545] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.121 [2024-05-14 23:57:40.614263] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.121 [2024-05-14 23:57:40.614283] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.121 [2024-05-14 23:57:40.628061] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.121 [2024-05-14 23:57:40.628082] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.121 [2024-05-14 23:57:40.641810] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.121 [2024-05-14 23:57:40.641831] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.121 [2024-05-14 23:57:40.655540] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.121 [2024-05-14 23:57:40.655561] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.121 [2024-05-14 23:57:40.669298] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.121 [2024-05-14 23:57:40.669322] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.121 [2024-05-14 23:57:40.682963] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.121 [2024-05-14 23:57:40.682983] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.121 [2024-05-14 23:57:40.696160] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.121 [2024-05-14 23:57:40.696181] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.121 [2024-05-14 23:57:40.710150] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.121 [2024-05-14 23:57:40.710170] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.723877] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.723898] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.738148] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.738172] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.750008] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.750028] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.763643] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.763664] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.777058] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.777079] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.790774] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.790796] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.804528] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.804548] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.817956] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.817977] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 00:15:40.381 Latency(us) 00:15:40.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.381 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:40.381 Nvme1n1 : 5.01 16815.50 131.37 0.00 0.00 7604.59 2437.94 28521.27 00:15:40.381 =================================================================================================================== 00:15:40.381 Total : 16815.50 131.37 0.00 0.00 7604.59 2437.94 28521.27 00:15:40.381 [2024-05-14 23:57:40.827459] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.827478] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.839486] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.839502] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.851526] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.851543] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.863557] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.863577] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.875584] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.875598] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.887611] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.887625] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.899642] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.899657] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.911678] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.911696] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.923706] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.923720] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.935734] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.935751] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.947766] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.947778] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.959801] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.959813] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.381 [2024-05-14 23:57:40.971832] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.381 [2024-05-14 23:57:40.971844] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.671 [2024-05-14 23:57:40.983863] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.671 [2024-05-14 23:57:40.983874] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.671 [2024-05-14 23:57:40.995895] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.671 [2024-05-14 23:57:40.995907] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.671 [2024-05-14 23:57:41.007928] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.671 [2024-05-14 23:57:41.007944] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.671 [2024-05-14 23:57:41.019957] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.671 [2024-05-14 23:57:41.019969] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3562661) - No such process 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3562661 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:40.671 delay0 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.671 23:57:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:40.671 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.671 [2024-05-14 23:57:41.159539] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: 
Skipping unsupported current discovery service or discovery service referral 00:15:47.290 Initializing NVMe Controllers 00:15:47.290 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:47.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:47.290 Initialization complete. Launching workers. 00:15:47.291 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 86 00:15:47.291 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 365, failed to submit 41 00:15:47.291 success 170, unsuccess 195, failed 0 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.291 rmmod nvme_tcp 00:15:47.291 rmmod nvme_fabrics 00:15:47.291 rmmod nvme_keyring 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3560767 ']' 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3560767 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3560767 ']' 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3560767 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3560767 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3560767' 00:15:47.291 killing process with pid 3560767 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3560767 00:15:47.291 [2024-05-14 23:57:47.442261] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3560767 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.291 23:57:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.194 23:57:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:49.194 00:15:49.194 real 0m32.754s 00:15:49.194 user 0m41.933s 00:15:49.194 sys 0m13.341s 00:15:49.194 23:57:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:49.194 23:57:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.194 ************************************ 00:15:49.194 END TEST nvmf_zcopy 00:15:49.194 ************************************ 00:15:49.194 23:57:49 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:49.194 23:57:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:49.194 23:57:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:49.194 23:57:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:49.453 ************************************ 00:15:49.453 START TEST nvmf_nmic 00:15:49.453 ************************************ 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:49.453 * Looking for test storage... 00:15:49.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.453 23:57:49 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.453 23:57:49 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:49.453 23:57:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:56.017 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:56.017 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:56.017 Found net devices under 0000:af:00.0: 
cvl_0_0 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:56.017 Found net devices under 0000:af:00.1: cvl_0_1 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.017 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:15:56.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:15:56.276 00:15:56.276 --- 10.0.0.2 ping statistics --- 00:15:56.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.276 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:15:56.276 00:15:56.276 --- 10.0.0.1 ping statistics --- 00:15:56.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.276 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.276 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3568464 00:15:56.277 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:56.277 23:57:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3568464 00:15:56.277 23:57:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3568464 ']' 00:15:56.277 23:57:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.277 23:57:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:56.277 23:57:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.277 23:57:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:56.277 23:57:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.277 [2024-05-14 23:57:56.729839] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
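For reference, the NIC-splitting and connectivity check that nvmftestinit logged just above reduces to a short iproute2 sequence; this is a minimal sketch assembled from the commands echoed in this run, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing that this particular machine reported (both are environment-specific):

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator side (host) and the target side (namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # bring the links up and open TCP port 4420 toward the initiator port
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions before nvmf_tgt is started inside the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The successful ping statistics above confirm this plumbing before the target application is launched.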
00:15:56.277 [2024-05-14 23:57:56.729886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.277 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.277 [2024-05-14 23:57:56.803636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.535 [2024-05-14 23:57:56.879766] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.535 [2024-05-14 23:57:56.879799] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.535 [2024-05-14 23:57:56.879809] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.535 [2024-05-14 23:57:56.879817] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.535 [2024-05-14 23:57:56.879823] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.535 [2024-05-14 23:57:56.879868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.536 [2024-05-14 23:57:56.879964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.536 [2024-05-14 23:57:56.880048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.536 [2024-05-14 23:57:56.880050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 [2024-05-14 23:57:57.588101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 Malloc0 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 [2024-05-14 23:57:57.642676] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:57.140 [2024-05-14 23:57:57.642931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:57.140 test case1: single bdev can't be used in multiple subsystems 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 [2024-05-14 23:57:57.666781] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:57.140 [2024-05-14 23:57:57.666802] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:57.140 [2024-05-14 23:57:57.666812] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.140 request: 00:15:57.140 { 00:15:57.140 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:57.140 "namespace": { 00:15:57.140 "bdev_name": "Malloc0", 00:15:57.140 "no_auto_visible": false 00:15:57.140 }, 00:15:57.140 "method": "nvmf_subsystem_add_ns", 00:15:57.140 "req_id": 1 00:15:57.140 } 00:15:57.140 Got JSON-RPC error response 00:15:57.140 response: 00:15:57.140 { 00:15:57.140 "code": -32602, 00:15:57.140 "message": "Invalid parameters" 00:15:57.140 } 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:57.140 23:57:57 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:57.140 Adding namespace failed - expected result. 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:57.140 test case2: host connect to nvmf target in multiple paths 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 [2024-05-14 23:57:57.682946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.140 23:57:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:58.519 23:57:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:59.896 23:58:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:59.896 23:58:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:15:59.896 23:58:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.896 23:58:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:59.896 23:58:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:16:01.800 23:58:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:01.800 23:58:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:01.800 23:58:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.083 23:58:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:02.083 23:58:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.083 23:58:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:16:02.083 23:58:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:02.083 [global] 00:16:02.083 thread=1 00:16:02.083 invalidate=1 00:16:02.083 rw=write 00:16:02.083 time_based=1 00:16:02.083 runtime=1 00:16:02.083 ioengine=libaio 00:16:02.083 direct=1 00:16:02.083 bs=4096 00:16:02.083 iodepth=1 00:16:02.083 norandommap=0 00:16:02.083 numjobs=1 00:16:02.083 00:16:02.083 verify_dump=1 00:16:02.083 verify_backlog=512 00:16:02.083 verify_state_save=0 00:16:02.083 do_verify=1 00:16:02.083 verify=crc32c-intel 00:16:02.083 [job0] 00:16:02.083 filename=/dev/nvme0n1 00:16:02.084 Could not set queue depth (nvme0n1) 00:16:02.382 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:16:02.382 fio-3.35 00:16:02.382 Starting 1 thread 00:16:03.314 00:16:03.314 job0: (groupid=0, jobs=1): err= 0: pid=3569598: Tue May 14 23:58:03 2024 00:16:03.314 read: IOPS=1087, BW=4352KiB/s (4456kB/s)(4356KiB/1001msec) 00:16:03.314 slat (nsec): min=8673, max=21079, avg=9337.10, stdev=845.23 00:16:03.314 clat (usec): min=298, max=623, avg=507.78, stdev=51.84 00:16:03.314 lat (usec): min=307, max=632, avg=517.11, stdev=51.84 00:16:03.314 clat percentiles (usec): 00:16:03.314 | 1.00th=[ 314], 5.00th=[ 429], 10.00th=[ 457], 20.00th=[ 465], 00:16:03.314 | 30.00th=[ 490], 40.00th=[ 506], 50.00th=[ 523], 60.00th=[ 529], 00:16:03.314 | 70.00th=[ 537], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 586], 00:16:03.314 | 99.00th=[ 611], 99.50th=[ 611], 99.90th=[ 619], 99.95th=[ 627], 00:16:03.314 | 99.99th=[ 627] 00:16:03.314 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:03.314 slat (usec): min=12, max=25378, avg=29.82, stdev=647.21 00:16:03.314 clat (usec): min=181, max=650, avg=249.18, stdev=60.00 00:16:03.314 lat (usec): min=205, max=25969, avg=279.00, stdev=658.63 00:16:03.314 clat percentiles (usec): 00:16:03.314 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:16:03.314 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 231], 00:16:03.314 | 70.00th=[ 241], 80.00th=[ 277], 90.00th=[ 326], 95.00th=[ 388], 00:16:03.314 | 99.00th=[ 461], 99.50th=[ 478], 99.90th=[ 594], 99.95th=[ 652], 00:16:03.314 | 99.99th=[ 652] 00:16:03.314 bw ( KiB/s): min= 6272, max= 6272, per=100.00%, avg=6272.00, stdev= 0.00, samples=1 00:16:03.314 iops : min= 1568, max= 1568, avg=1568.00, stdev= 0.00, samples=1 00:16:03.314 lat (usec) : 250=43.31%, 500=30.51%, 750=26.17% 00:16:03.314 cpu : usr=2.80%, sys=4.20%, ctx=2627, majf=0, minf=2 00:16:03.314 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.314 issued rwts: total=1089,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.314 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.314 00:16:03.314 Run status group 0 (all jobs): 00:16:03.314 READ: bw=4352KiB/s (4456kB/s), 4352KiB/s-4352KiB/s (4456kB/s-4456kB/s), io=4356KiB (4461kB), run=1001-1001msec 00:16:03.314 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:16:03.314 00:16:03.314 Disk stats (read/write): 00:16:03.314 nvme0n1: ios=1052/1238, merge=0/0, ticks=1479/305, in_queue=1784, util=98.70% 00:16:03.314 23:58:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:03.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # 
return 0 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.572 rmmod nvme_tcp 00:16:03.572 rmmod nvme_fabrics 00:16:03.572 rmmod nvme_keyring 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3568464 ']' 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3568464 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3568464 ']' 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3568464 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:03.572 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3568464 00:16:03.830 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:03.830 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:03.830 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3568464' 00:16:03.830 killing process with pid 3568464 00:16:03.830 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3568464 00:16:03.830 [2024-05-14 23:58:04.207263] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:03.830 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3568464 00:16:04.089 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:04.089 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:04.089 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:04.089 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:04.089 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:04.089 23:58:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.089 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.089 23:58:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.984 23:58:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:05.984 00:16:05.984 real 0m16.686s 00:16:05.984 user 0m39.604s 00:16:05.984 sys 0m6.161s 00:16:05.984 23:58:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:05.984 23:58:06 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.984 ************************************ 00:16:05.984 END TEST nvmf_nmic 00:16:05.984 ************************************ 00:16:05.984 23:58:06 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:05.984 23:58:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:05.984 23:58:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:05.984 23:58:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:06.242 ************************************ 00:16:06.242 START TEST nvmf_fio_target 00:16:06.242 ************************************ 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:06.242 * Looking for test storage... 00:16:06.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.242 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:06.243 23:58:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.804 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:12.805 23:58:13 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:12.805 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:12.805 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.805 23:58:13 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:12.805 Found net devices under 0000:af:00.0: cvl_0_0 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:12.805 Found net devices under 0000:af:00.1: cvl_0_1 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:12.805 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:12.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:16:12.806 00:16:12.806 --- 10.0.0.2 ping statistics --- 00:16:12.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.806 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:12.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:12.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:16:12.806 00:16:12.806 --- 10.0.0.1 ping statistics --- 00:16:12.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.806 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:12.806 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.064 23:58:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3573413 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3573413 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3573413 ']' 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:13.065 23:58:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.065 [2024-05-14 23:58:13.458169] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:16:13.065 [2024-05-14 23:58:13.458229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.065 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.065 [2024-05-14 23:58:13.534497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.065 [2024-05-14 23:58:13.608154] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.065 [2024-05-14 23:58:13.608199] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.065 [2024-05-14 23:58:13.608209] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.065 [2024-05-14 23:58:13.608218] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.065 [2024-05-14 23:58:13.608225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.065 [2024-05-14 23:58:13.608274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.065 [2024-05-14 23:58:13.608369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.065 [2024-05-14 23:58:13.608388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.065 [2024-05-14 23:58:13.608393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.998 23:58:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:13.998 23:58:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:16:13.998 23:58:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:13.998 23:58:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.998 23:58:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.998 23:58:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.998 23:58:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:13.998 [2024-05-14 23:58:14.457570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.998 23:58:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:14.256 23:58:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:14.256 23:58:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:14.514 23:58:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:14.514 23:58:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:14.514 23:58:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:14.514 23:58:15 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:14.772 23:58:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:14.772 23:58:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:15.030 23:58:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.288 23:58:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:15.288 23:58:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.288 23:58:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:15.288 23:58:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.547 23:58:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:15.547 23:58:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:15.804 23:58:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:16.064 23:58:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:16.064 23:58:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:16.064 23:58:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:16.064 23:58:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:16.322 23:58:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.580 [2024-05-14 23:58:16.930406] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:16.580 [2024-05-14 23:58:16.930714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.580 23:58:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:16.580 23:58:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:16.838 23:58:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:18.211 23:58:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:16:18.211 23:58:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:16:18.211 23:58:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.211 23:58:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:16:18.211 23:58:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:16:18.211 23:58:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:16:20.109 23:58:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:20.109 23:58:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:20.109 23:58:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.109 23:58:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:16:20.109 23:58:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.109 23:58:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:16:20.109 23:58:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:20.367 [global] 00:16:20.367 thread=1 00:16:20.367 invalidate=1 00:16:20.367 rw=write 00:16:20.367 time_based=1 00:16:20.367 runtime=1 00:16:20.367 ioengine=libaio 00:16:20.367 direct=1 00:16:20.367 bs=4096 00:16:20.367 iodepth=1 00:16:20.367 norandommap=0 00:16:20.367 numjobs=1 00:16:20.367 00:16:20.367 verify_dump=1 00:16:20.367 verify_backlog=512 00:16:20.367 verify_state_save=0 00:16:20.367 do_verify=1 00:16:20.367 verify=crc32c-intel 00:16:20.367 [job0] 00:16:20.367 filename=/dev/nvme0n1 00:16:20.367 [job1] 00:16:20.367 filename=/dev/nvme0n2 00:16:20.367 [job2] 00:16:20.367 filename=/dev/nvme0n3 00:16:20.367 [job3] 00:16:20.367 filename=/dev/nvme0n4 00:16:20.367 Could not set queue depth (nvme0n1) 00:16:20.367 Could not set queue depth (nvme0n2) 00:16:20.367 Could not set queue depth (nvme0n3) 00:16:20.367 Could not set queue depth (nvme0n4) 00:16:20.624 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.624 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.624 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.624 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.624 fio-3.35 00:16:20.624 Starting 4 threads 00:16:21.995 00:16:21.995 job0: (groupid=0, jobs=1): err= 0: pid=3574958: Tue May 14 23:58:22 2024 00:16:21.995 read: IOPS=20, BW=82.3KiB/s (84.2kB/s)(84.0KiB/1021msec) 00:16:21.995 slat (nsec): min=11384, max=28453, avg=24416.29, stdev=3174.93 00:16:21.995 clat (usec): min=40937, max=42110, avg=41738.22, stdev=423.86 00:16:21.995 lat (usec): min=40963, max=42135, avg=41762.63, stdev=424.97 00:16:21.995 clat percentiles (usec): 00:16:21.995 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:21.995 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:21.995 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:21.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 
00:16:21.995 | 99.99th=[42206] 00:16:21.995 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:16:21.995 slat (nsec): min=12217, max=75499, avg=14018.63, stdev=3655.71 00:16:21.995 clat (usec): min=208, max=432, avg=261.81, stdev=40.53 00:16:21.995 lat (usec): min=222, max=474, avg=275.83, stdev=41.24 00:16:21.995 clat percentiles (usec): 00:16:21.995 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:16:21.995 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 258], 00:16:21.995 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 326], 95.00th=[ 359], 00:16:21.995 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 433], 99.95th=[ 433], 00:16:21.995 | 99.99th=[ 433] 00:16:21.995 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:16:21.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:21.995 lat (usec) : 250=52.35%, 500=43.71% 00:16:21.995 lat (msec) : 50=3.94% 00:16:21.995 cpu : usr=0.59%, sys=0.88%, ctx=534, majf=0, minf=1 00:16:21.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.995 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.995 job1: (groupid=0, jobs=1): err= 0: pid=3574959: Tue May 14 23:58:22 2024 00:16:21.995 read: IOPS=25, BW=101KiB/s (103kB/s)(104KiB/1030msec) 00:16:21.995 slat (nsec): min=8863, max=29539, avg=22013.62, stdev=6596.19 00:16:21.995 clat (usec): min=972, max=42119, avg=34532.53, stdev=15345.02 00:16:21.995 lat (usec): min=993, max=42130, avg=34554.54, stdev=15346.44 00:16:21.995 clat percentiles (usec): 00:16:21.995 | 1.00th=[ 971], 5.00th=[ 1123], 10.00th=[ 1172], 20.00th=[41157], 00:16:21.995 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:21.995 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:21.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:21.995 | 99.99th=[42206] 00:16:21.995 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:16:21.995 slat (usec): min=7, max=953, avg=15.84, stdev=59.48 00:16:21.995 clat (usec): min=168, max=514, avg=236.47, stdev=43.22 00:16:21.995 lat (usec): min=176, max=1165, avg=252.31, stdev=73.76 00:16:21.995 clat percentiles (usec): 00:16:21.995 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:16:21.995 | 30.00th=[ 210], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:16:21.995 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 293], 95.00th=[ 306], 00:16:21.995 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 515], 99.95th=[ 515], 00:16:21.995 | 99.99th=[ 515] 00:16:21.995 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:16:21.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:21.995 lat (usec) : 250=66.54%, 500=28.44%, 750=0.19%, 1000=0.19% 00:16:21.995 lat (msec) : 2=0.56%, 20=0.19%, 50=3.90% 00:16:21.995 cpu : usr=0.19%, sys=0.68%, ctx=543, majf=0, minf=1 00:16:21.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.995 issued rwts: total=26,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:16:21.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.995 job2: (groupid=0, jobs=1): err= 0: pid=3574960: Tue May 14 23:58:22 2024 00:16:21.995 read: IOPS=20, BW=81.7KiB/s (83.7kB/s)(84.0KiB/1028msec) 00:16:21.995 slat (nsec): min=12840, max=26208, avg=23792.00, stdev=2698.51 00:16:21.995 clat (usec): min=40958, max=42069, avg=41756.67, stdev=383.49 00:16:21.995 lat (usec): min=40984, max=42092, avg=41780.46, stdev=383.90 00:16:21.995 clat percentiles (usec): 00:16:21.995 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:21.995 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:21.995 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:21.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:21.995 | 99.99th=[42206] 00:16:21.995 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:16:21.995 slat (nsec): min=3849, max=68314, avg=11332.20, stdev=4561.85 00:16:21.995 clat (usec): min=206, max=804, avg=280.41, stdev=92.97 00:16:21.995 lat (usec): min=211, max=872, avg=291.74, stdev=92.96 00:16:21.995 clat percentiles (usec): 00:16:21.995 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:16:21.995 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 249], 00:16:21.995 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 453], 95.00th=[ 515], 00:16:21.995 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 807], 99.95th=[ 807], 00:16:21.995 | 99.99th=[ 807] 00:16:21.995 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:16:21.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:21.995 lat (usec) : 250=58.91%, 500=32.27%, 750=4.69%, 1000=0.19% 00:16:21.995 lat (msec) : 50=3.94% 00:16:21.995 cpu : usr=0.49%, sys=0.68%, ctx=534, majf=0, minf=1 00:16:21.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.995 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.995 job3: (groupid=0, jobs=1): err= 0: pid=3574961: Tue May 14 23:58:22 2024 00:16:21.995 read: IOPS=505, BW=2023KiB/s (2072kB/s)(2096KiB/1036msec) 00:16:21.995 slat (nsec): min=9307, max=43906, avg=10409.54, stdev=2870.49 00:16:21.995 clat (usec): min=325, max=41975, avg=1389.18, stdev=6190.80 00:16:21.995 lat (usec): min=335, max=42000, avg=1399.59, stdev=6193.00 00:16:21.995 clat percentiles (usec): 00:16:21.995 | 1.00th=[ 334], 5.00th=[ 351], 10.00th=[ 375], 20.00th=[ 392], 00:16:21.995 | 30.00th=[ 424], 40.00th=[ 441], 50.00th=[ 465], 60.00th=[ 469], 00:16:21.995 | 70.00th=[ 478], 80.00th=[ 486], 90.00th=[ 490], 95.00th=[ 498], 00:16:21.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:21.995 | 99.99th=[42206] 00:16:21.995 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:16:21.995 slat (nsec): min=5665, max=40662, avg=13455.13, stdev=2594.47 00:16:21.995 clat (usec): min=188, max=1235, avg=276.56, stdev=82.30 00:16:21.995 lat (usec): min=200, max=1248, avg=290.02, stdev=82.36 00:16:21.995 clat percentiles (usec): 00:16:21.995 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 227], 00:16:21.995 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 249], 
60.00th=[ 262], 00:16:21.995 | 70.00th=[ 281], 80.00th=[ 326], 90.00th=[ 379], 95.00th=[ 449], 00:16:21.995 | 99.00th=[ 562], 99.50th=[ 627], 99.90th=[ 881], 99.95th=[ 1237], 00:16:21.995 | 99.99th=[ 1237] 00:16:21.995 bw ( KiB/s): min= 1224, max= 6968, per=41.44%, avg=4096.00, stdev=4061.62, samples=2 00:16:21.995 iops : min= 306, max= 1742, avg=1024.00, stdev=1015.41, samples=2 00:16:21.995 lat (usec) : 250=33.85%, 500=63.63%, 750=1.55%, 1000=0.13% 00:16:21.995 lat (msec) : 2=0.06%, 50=0.78% 00:16:21.995 cpu : usr=1.45%, sys=2.32%, ctx=1549, majf=0, minf=2 00:16:21.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.995 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.995 00:16:21.995 Run status group 0 (all jobs): 00:16:21.995 READ: bw=2286KiB/s (2341kB/s), 81.7KiB/s-2023KiB/s (83.7kB/s-2072kB/s), io=2368KiB (2425kB), run=1021-1036msec 00:16:21.995 WRITE: bw=9884KiB/s (10.1MB/s), 1988KiB/s-3954KiB/s (2036kB/s-4049kB/s), io=10.0MiB (10.5MB), run=1021-1036msec 00:16:21.995 00:16:21.995 Disk stats (read/write): 00:16:21.995 nvme0n1: ios=38/512, merge=0/0, ticks=1503/129, in_queue=1632, util=83.17% 00:16:21.995 nvme0n2: ios=76/512, merge=0/0, ticks=818/118, in_queue=936, util=90.69% 00:16:21.995 nvme0n3: ios=73/512, merge=0/0, ticks=789/144, in_queue=933, util=94.79% 00:16:21.995 nvme0n4: ios=542/1024, merge=0/0, ticks=1396/275, in_queue=1671, util=94.17% 00:16:21.996 23:58:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:21.996 [global] 00:16:21.996 thread=1 00:16:21.996 invalidate=1 00:16:21.996 rw=randwrite 00:16:21.996 time_based=1 00:16:21.996 runtime=1 00:16:21.996 ioengine=libaio 00:16:21.996 direct=1 00:16:21.996 bs=4096 00:16:21.996 iodepth=1 00:16:21.996 norandommap=0 00:16:21.996 numjobs=1 00:16:21.996 00:16:21.996 verify_dump=1 00:16:21.996 verify_backlog=512 00:16:21.996 verify_state_save=0 00:16:21.996 do_verify=1 00:16:21.996 verify=crc32c-intel 00:16:21.996 [job0] 00:16:21.996 filename=/dev/nvme0n1 00:16:21.996 [job1] 00:16:21.996 filename=/dev/nvme0n2 00:16:21.996 [job2] 00:16:21.996 filename=/dev/nvme0n3 00:16:21.996 [job3] 00:16:21.996 filename=/dev/nvme0n4 00:16:21.996 Could not set queue depth (nvme0n1) 00:16:21.996 Could not set queue depth (nvme0n2) 00:16:21.996 Could not set queue depth (nvme0n3) 00:16:21.996 Could not set queue depth (nvme0n4) 00:16:22.252 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.252 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.252 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.252 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.252 fio-3.35 00:16:22.252 Starting 4 threads 00:16:23.657 00:16:23.657 job0: (groupid=0, jobs=1): err= 0: pid=3575386: Tue May 14 23:58:24 2024 00:16:23.657 read: IOPS=1184, BW=4739KiB/s (4853kB/s)(4744KiB/1001msec) 00:16:23.657 slat (nsec): min=8669, max=23413, avg=9338.48, stdev=903.94 00:16:23.657 clat (usec): min=355, 
max=565, avg=495.99, stdev=37.20 00:16:23.657 lat (usec): min=365, max=574, avg=505.33, stdev=37.18 00:16:23.657 clat percentiles (usec): 00:16:23.657 | 1.00th=[ 375], 5.00th=[ 408], 10.00th=[ 449], 20.00th=[ 478], 00:16:23.657 | 30.00th=[ 486], 40.00th=[ 494], 50.00th=[ 502], 60.00th=[ 510], 00:16:23.657 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 537], 95.00th=[ 545], 00:16:23.657 | 99.00th=[ 553], 99.50th=[ 553], 99.90th=[ 562], 99.95th=[ 570], 00:16:23.657 | 99.99th=[ 570] 00:16:23.657 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:23.657 slat (nsec): min=7360, max=29776, avg=12404.90, stdev=1650.06 00:16:23.657 clat (usec): min=188, max=1537, avg=243.55, stdev=63.58 00:16:23.657 lat (usec): min=200, max=1549, avg=255.96, stdev=63.46 00:16:23.657 clat percentiles (usec): 00:16:23.657 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:16:23.658 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 237], 00:16:23.658 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 297], 00:16:23.658 | 99.00th=[ 453], 99.50th=[ 461], 99.90th=[ 1516], 99.95th=[ 1532], 00:16:23.658 | 99.99th=[ 1532] 00:16:23.658 bw ( KiB/s): min= 6744, max= 6744, per=37.54%, avg=6744.00, stdev= 0.00, samples=1 00:16:23.658 iops : min= 1686, max= 1686, avg=1686.00, stdev= 0.00, samples=1 00:16:23.658 lat (usec) : 250=36.66%, 500=41.04%, 750=22.23% 00:16:23.658 lat (msec) : 2=0.07% 00:16:23.658 cpu : usr=2.40%, sys=4.80%, ctx=2722, majf=0, minf=2 00:16:23.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.658 issued rwts: total=1186,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.658 job1: (groupid=0, jobs=1): err= 0: pid=3575388: Tue May 14 23:58:24 2024 00:16:23.658 read: IOPS=19, BW=78.0KiB/s (79.8kB/s)(80.0KiB/1026msec) 00:16:23.658 slat (nsec): min=11139, max=25350, avg=24041.05, stdev=3070.99 00:16:23.658 clat (usec): min=40948, max=43013, avg=41898.46, stdev=421.29 00:16:23.658 lat (usec): min=40973, max=43038, avg=41922.50, stdev=421.89 00:16:23.658 clat percentiles (usec): 00:16:23.658 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:23.658 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:23.658 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:23.658 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:23.658 | 99.99th=[43254] 00:16:23.658 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:16:23.658 slat (nsec): min=7373, max=39927, avg=12644.67, stdev=2956.99 00:16:23.658 clat (usec): min=210, max=679, avg=350.21, stdev=94.89 00:16:23.658 lat (usec): min=221, max=697, avg=362.86, stdev=96.07 00:16:23.658 clat percentiles (usec): 00:16:23.658 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 243], 00:16:23.658 | 30.00th=[ 273], 40.00th=[ 314], 50.00th=[ 355], 60.00th=[ 388], 00:16:23.658 | 70.00th=[ 416], 80.00th=[ 457], 90.00th=[ 465], 95.00th=[ 469], 00:16:23.658 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 676], 99.95th=[ 676], 00:16:23.658 | 99.99th=[ 676] 00:16:23.658 bw ( KiB/s): min= 4096, max= 4096, per=22.80%, avg=4096.00, stdev= 0.00, samples=1 00:16:23.658 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:23.658 
lat (usec) : 250=22.18%, 500=72.37%, 750=1.69% 00:16:23.658 lat (msec) : 50=3.76% 00:16:23.658 cpu : usr=0.59%, sys=0.49%, ctx=533, majf=0, minf=1 00:16:23.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.658 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.658 job2: (groupid=0, jobs=1): err= 0: pid=3575391: Tue May 14 23:58:24 2024 00:16:23.658 read: IOPS=1105, BW=4424KiB/s (4530kB/s)(4428KiB/1001msec) 00:16:23.658 slat (nsec): min=9006, max=44303, avg=10089.89, stdev=1874.58 00:16:23.658 clat (usec): min=326, max=633, avg=497.28, stdev=47.20 00:16:23.658 lat (usec): min=336, max=643, avg=507.37, stdev=47.25 00:16:23.658 clat percentiles (usec): 00:16:23.658 | 1.00th=[ 343], 5.00th=[ 379], 10.00th=[ 441], 20.00th=[ 482], 00:16:23.658 | 30.00th=[ 494], 40.00th=[ 498], 50.00th=[ 510], 60.00th=[ 515], 00:16:23.658 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 537], 95.00th=[ 545], 00:16:23.658 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 611], 99.95th=[ 635], 00:16:23.658 | 99.99th=[ 635] 00:16:23.658 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:23.658 slat (nsec): min=9759, max=33148, avg=13790.41, stdev=2101.89 00:16:23.658 clat (usec): min=184, max=2290, avg=266.63, stdev=123.23 00:16:23.658 lat (usec): min=199, max=2304, avg=280.42, stdev=123.62 00:16:23.658 clat percentiles (usec): 00:16:23.658 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:16:23.658 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:16:23.658 | 70.00th=[ 258], 80.00th=[ 289], 90.00th=[ 355], 95.00th=[ 396], 00:16:23.658 | 99.00th=[ 449], 99.50th=[ 529], 99.90th=[ 2212], 99.95th=[ 2278], 00:16:23.658 | 99.99th=[ 2278] 00:16:23.658 bw ( KiB/s): min= 6248, max= 6248, per=34.78%, avg=6248.00, stdev= 0.00, samples=1 00:16:23.658 iops : min= 1562, max= 1562, avg=1562.00, stdev= 0.00, samples=1 00:16:23.658 lat (usec) : 250=37.31%, 500=37.61%, 750=24.82% 00:16:23.658 lat (msec) : 2=0.15%, 4=0.11% 00:16:23.658 cpu : usr=3.10%, sys=4.20%, ctx=2646, majf=0, minf=1 00:16:23.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.658 issued rwts: total=1107,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.658 job3: (groupid=0, jobs=1): err= 0: pid=3575392: Tue May 14 23:58:24 2024 00:16:23.658 read: IOPS=758, BW=3033KiB/s (3106kB/s)(3036KiB/1001msec) 00:16:23.658 slat (nsec): min=4038, max=22892, avg=5571.43, stdev=1211.12 00:16:23.658 clat (usec): min=510, max=1046, avg=728.25, stdev=92.11 00:16:23.658 lat (usec): min=516, max=1053, avg=733.82, stdev=92.56 00:16:23.658 clat percentiles (usec): 00:16:23.658 | 1.00th=[ 529], 5.00th=[ 553], 10.00th=[ 635], 20.00th=[ 668], 00:16:23.658 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 717], 60.00th=[ 734], 00:16:23.658 | 70.00th=[ 758], 80.00th=[ 807], 90.00th=[ 881], 95.00th=[ 898], 00:16:23.658 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 1045], 99.95th=[ 1045], 00:16:23.658 | 99.99th=[ 1045] 00:16:23.658 write: IOPS=1022, BW=4092KiB/s 
(4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:23.658 slat (nsec): min=4326, max=55506, avg=7210.91, stdev=2407.77 00:16:23.658 clat (usec): min=194, max=1932, avg=422.97, stdev=159.69 00:16:23.658 lat (usec): min=199, max=1939, avg=430.18, stdev=160.99 00:16:23.658 clat percentiles (usec): 00:16:23.658 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 269], 00:16:23.658 | 30.00th=[ 326], 40.00th=[ 396], 50.00th=[ 478], 60.00th=[ 490], 00:16:23.658 | 70.00th=[ 498], 80.00th=[ 506], 90.00th=[ 519], 95.00th=[ 545], 00:16:23.658 | 99.00th=[ 889], 99.50th=[ 1336], 99.90th=[ 1926], 99.95th=[ 1926], 00:16:23.658 | 99.99th=[ 1926] 00:16:23.658 bw ( KiB/s): min= 4096, max= 4096, per=22.80%, avg=4096.00, stdev= 0.00, samples=1 00:16:23.658 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:23.658 lat (usec) : 250=9.09%, 500=33.43%, 750=42.96%, 1000=13.97% 00:16:23.658 lat (msec) : 2=0.56% 00:16:23.658 cpu : usr=1.20%, sys=1.40%, ctx=1783, majf=0, minf=1 00:16:23.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.658 issued rwts: total=759,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.658 00:16:23.658 Run status group 0 (all jobs): 00:16:23.658 READ: bw=11.7MiB/s (12.3MB/s), 78.0KiB/s-4739KiB/s (79.8kB/s-4853kB/s), io=12.0MiB (12.6MB), run=1001-1026msec 00:16:23.658 WRITE: bw=17.5MiB/s (18.4MB/s), 1996KiB/s-6138KiB/s (2044kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1026msec 00:16:23.658 00:16:23.658 Disk stats (read/write): 00:16:23.658 nvme0n1: ios=1074/1109, merge=0/0, ticks=535/270, in_queue=805, util=84.47% 00:16:23.658 nvme0n2: ios=30/512, merge=0/0, ticks=652/170, in_queue=822, util=84.92% 00:16:23.658 nvme0n3: ios=1059/1024, merge=0/0, ticks=1665/262, in_queue=1927, util=97.22% 00:16:23.658 nvme0n4: ios=512/1014, merge=0/0, ticks=374/419, in_queue=793, util=89.27% 00:16:23.658 23:58:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:23.658 [global] 00:16:23.658 thread=1 00:16:23.658 invalidate=1 00:16:23.658 rw=write 00:16:23.658 time_based=1 00:16:23.658 runtime=1 00:16:23.658 ioengine=libaio 00:16:23.658 direct=1 00:16:23.658 bs=4096 00:16:23.658 iodepth=128 00:16:23.658 norandommap=0 00:16:23.658 numjobs=1 00:16:23.658 00:16:23.658 verify_dump=1 00:16:23.658 verify_backlog=512 00:16:23.658 verify_state_save=0 00:16:23.658 do_verify=1 00:16:23.658 verify=crc32c-intel 00:16:23.658 [job0] 00:16:23.658 filename=/dev/nvme0n1 00:16:23.658 [job1] 00:16:23.658 filename=/dev/nvme0n2 00:16:23.658 [job2] 00:16:23.658 filename=/dev/nvme0n3 00:16:23.658 [job3] 00:16:23.658 filename=/dev/nvme0n4 00:16:23.658 Could not set queue depth (nvme0n1) 00:16:23.658 Could not set queue depth (nvme0n2) 00:16:23.658 Could not set queue depth (nvme0n3) 00:16:23.658 Could not set queue depth (nvme0n4) 00:16:23.916 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:23.916 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:23.916 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:23.916 job3: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:23.916 fio-3.35 00:16:23.916 Starting 4 threads 00:16:25.292 00:16:25.292 job0: (groupid=0, jobs=1): err= 0: pid=3575807: Tue May 14 23:58:25 2024 00:16:25.292 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:16:25.292 slat (usec): min=2, max=15999, avg=123.26, stdev=892.90 00:16:25.292 clat (usec): min=5212, max=47268, avg=17320.83, stdev=9849.96 00:16:25.292 lat (usec): min=5221, max=47281, avg=17444.09, stdev=9909.08 00:16:25.292 clat percentiles (usec): 00:16:25.292 | 1.00th=[ 5538], 5.00th=[ 8029], 10.00th=[ 9110], 20.00th=[10290], 00:16:25.292 | 30.00th=[10552], 40.00th=[11994], 50.00th=[13173], 60.00th=[15008], 00:16:25.292 | 70.00th=[17433], 80.00th=[27132], 90.00th=[34866], 95.00th=[36963], 00:16:25.292 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[46924], 00:16:25.292 | 99.99th=[47449] 00:16:25.292 write: IOPS=2907, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1009msec); 0 zone resets 00:16:25.292 slat (usec): min=2, max=57013, avg=221.64, stdev=2489.88 00:16:25.292 clat (msec): min=3, max=282, avg=20.67, stdev=24.75 00:16:25.292 lat (msec): min=3, max=282, avg=20.89, stdev=25.23 00:16:25.292 clat percentiles (msec): 00:16:25.292 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:16:25.292 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:16:25.292 | 70.00th=[ 17], 80.00th=[ 27], 90.00th=[ 45], 95.00th=[ 54], 00:16:25.292 | 99.00th=[ 159], 99.50th=[ 215], 99.90th=[ 284], 99.95th=[ 284], 00:16:25.292 | 99.99th=[ 284] 00:16:25.292 bw ( KiB/s): min= 8128, max=14328, per=16.92%, avg=11228.00, stdev=4384.06, samples=2 00:16:25.292 iops : min= 2032, max= 3582, avg=2807.00, stdev=1096.02, samples=2 00:16:25.292 lat (msec) : 4=0.09%, 10=22.19%, 20=51.16%, 50=21.95%, 100=3.44% 00:16:25.292 lat (msec) : 250=1.04%, 500=0.13% 00:16:25.292 cpu : usr=2.88%, sys=4.86%, ctx=196, majf=0, minf=1 00:16:25.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:25.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:25.292 issued rwts: total=2560,2934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:25.292 job1: (groupid=0, jobs=1): err= 0: pid=3575811: Tue May 14 23:58:25 2024 00:16:25.292 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:16:25.292 slat (usec): min=2, max=12004, avg=92.25, stdev=665.34 00:16:25.292 clat (usec): min=3509, max=37876, avg=12593.52, stdev=5118.38 00:16:25.292 lat (usec): min=3517, max=37889, avg=12685.76, stdev=5155.17 00:16:25.292 clat percentiles (usec): 00:16:25.292 | 1.00th=[ 6915], 5.00th=[ 7439], 10.00th=[ 8029], 20.00th=[ 8717], 00:16:25.292 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[11338], 60.00th=[12649], 00:16:25.292 | 70.00th=[13566], 80.00th=[14484], 90.00th=[18482], 95.00th=[23200], 00:16:25.292 | 99.00th=[32900], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011], 00:16:25.292 | 99.99th=[38011] 00:16:25.292 write: IOPS=5139, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1004msec); 0 zone resets 00:16:25.292 slat (usec): min=3, max=10984, avg=91.60, stdev=545.59 00:16:25.292 clat (usec): min=1322, max=31238, avg=12160.50, stdev=5119.28 00:16:25.292 lat (usec): min=1339, max=31250, avg=12252.09, stdev=5134.94 00:16:25.292 clat percentiles (usec): 00:16:25.292 | 1.00th=[ 4178], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 7635], 
00:16:25.292 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[12256], 00:16:25.292 | 70.00th=[14615], 80.00th=[16909], 90.00th=[19268], 95.00th=[21103], 00:16:25.292 | 99.00th=[26608], 99.50th=[27919], 99.90th=[30016], 99.95th=[31327], 00:16:25.292 | 99.99th=[31327] 00:16:25.292 bw ( KiB/s): min=16392, max=24568, per=30.86%, avg=20480.00, stdev=5781.31, samples=2 00:16:25.292 iops : min= 4098, max= 6142, avg=5120.00, stdev=1445.33, samples=2 00:16:25.292 lat (msec) : 2=0.17%, 4=0.16%, 10=38.45%, 20=53.39%, 50=7.83% 00:16:25.292 cpu : usr=5.18%, sys=7.48%, ctx=419, majf=0, minf=1 00:16:25.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:25.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:25.292 issued rwts: total=5120,5160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:25.292 job2: (groupid=0, jobs=1): err= 0: pid=3575819: Tue May 14 23:58:25 2024 00:16:25.292 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:16:25.292 slat (usec): min=2, max=9989, avg=84.54, stdev=595.70 00:16:25.292 clat (usec): min=4904, max=20448, avg=11566.44, stdev=2574.10 00:16:25.292 lat (usec): min=4910, max=24796, avg=11650.98, stdev=2616.43 00:16:25.292 clat percentiles (usec): 00:16:25.292 | 1.00th=[ 7373], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9634], 00:16:25.292 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10814], 60.00th=[11469], 00:16:25.292 | 70.00th=[12387], 80.00th=[13960], 90.00th=[15139], 95.00th=[16450], 00:16:25.292 | 99.00th=[19792], 99.50th=[20317], 99.90th=[20317], 99.95th=[20317], 00:16:25.292 | 99.99th=[20579] 00:16:25.292 write: IOPS=5551, BW=21.7MiB/s (22.7MB/s)(21.8MiB/1004msec); 0 zone resets 00:16:25.292 slat (usec): min=3, max=7247, avg=94.43, stdev=439.94 00:16:25.292 clat (usec): min=1906, max=21653, avg=12179.16, stdev=3574.68 00:16:25.292 lat (usec): min=1984, max=21667, avg=12273.60, stdev=3579.75 00:16:25.292 clat percentiles (usec): 00:16:25.292 | 1.00th=[ 3982], 5.00th=[ 6849], 10.00th=[ 8225], 20.00th=[ 8979], 00:16:25.292 | 30.00th=[ 9634], 40.00th=[10814], 50.00th=[12125], 60.00th=[13304], 00:16:25.292 | 70.00th=[14353], 80.00th=[15270], 90.00th=[16909], 95.00th=[17957], 00:16:25.292 | 99.00th=[20317], 99.50th=[20317], 99.90th=[21365], 99.95th=[21627], 00:16:25.292 | 99.99th=[21627] 00:16:25.292 bw ( KiB/s): min=20480, max=23096, per=32.83%, avg=21788.00, stdev=1849.79, samples=2 00:16:25.292 iops : min= 5120, max= 5774, avg=5447.00, stdev=462.45, samples=2 00:16:25.292 lat (msec) : 2=0.04%, 4=0.49%, 10=30.44%, 20=68.00%, 50=1.04% 00:16:25.292 cpu : usr=5.78%, sys=5.88%, ctx=570, majf=0, minf=1 00:16:25.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:25.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:25.292 issued rwts: total=5120,5574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:25.292 job3: (groupid=0, jobs=1): err= 0: pid=3575824: Tue May 14 23:58:25 2024 00:16:25.292 read: IOPS=2797, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1005msec) 00:16:25.292 slat (usec): min=2, max=27332, avg=171.28, stdev=1195.32 00:16:25.292 clat (usec): min=4532, max=82584, avg=21529.90, stdev=15359.37 00:16:25.292 lat (usec): min=5967, 
max=82594, avg=21701.18, stdev=15486.27 00:16:25.292 clat percentiles (usec): 00:16:25.292 | 1.00th=[ 6390], 5.00th=[10159], 10.00th=[11076], 20.00th=[11600], 00:16:25.292 | 30.00th=[12387], 40.00th=[12780], 50.00th=[14091], 60.00th=[16450], 00:16:25.292 | 70.00th=[18744], 80.00th=[31851], 90.00th=[49021], 95.00th=[55313], 00:16:25.292 | 99.00th=[74974], 99.50th=[74974], 99.90th=[82314], 99.95th=[82314], 00:16:25.292 | 99.99th=[82314] 00:16:25.292 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:16:25.292 slat (usec): min=3, max=12864, avg=161.30, stdev=837.51 00:16:25.292 clat (usec): min=9100, max=82571, avg=21545.53, stdev=12555.51 00:16:25.292 lat (usec): min=9117, max=82585, avg=21706.83, stdev=12624.64 00:16:25.292 clat percentiles (usec): 00:16:25.292 | 1.00th=[ 9765], 5.00th=[11338], 10.00th=[11469], 20.00th=[11994], 00:16:25.292 | 30.00th=[12518], 40.00th=[15008], 50.00th=[17171], 60.00th=[19530], 00:16:25.292 | 70.00th=[22938], 80.00th=[30278], 90.00th=[40633], 95.00th=[51119], 00:16:25.292 | 99.00th=[61080], 99.50th=[61080], 99.90th=[62653], 99.95th=[62653], 00:16:25.292 | 99.99th=[82314] 00:16:25.292 bw ( KiB/s): min= 8192, max=16384, per=18.52%, avg=12288.00, stdev=5792.62, samples=2 00:16:25.292 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:16:25.292 lat (msec) : 10=2.58%, 20=64.30%, 50=25.58%, 100=7.53% 00:16:25.292 cpu : usr=2.69%, sys=4.58%, ctx=355, majf=0, minf=1 00:16:25.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:25.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:25.292 issued rwts: total=2811,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:25.292 00:16:25.292 Run status group 0 (all jobs): 00:16:25.292 READ: bw=60.4MiB/s (63.4MB/s), 9.91MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=61.0MiB (63.9MB), run=1004-1009msec 00:16:25.292 WRITE: bw=64.8MiB/s (68.0MB/s), 11.4MiB/s-21.7MiB/s (11.9MB/s-22.7MB/s), io=65.4MiB (68.6MB), run=1004-1009msec 00:16:25.292 00:16:25.292 Disk stats (read/write): 00:16:25.292 nvme0n1: ios=2075/2487, merge=0/0, ticks=26556/28378, in_queue=54934, util=97.99% 00:16:25.292 nvme0n2: ios=4129/4368, merge=0/0, ticks=47271/52111, in_queue=99382, util=96.87% 00:16:25.292 nvme0n3: ios=4116/4370, merge=0/0, ticks=45768/52904, in_queue=98672, util=98.15% 00:16:25.292 nvme0n4: ios=2376/2560, merge=0/0, ticks=15339/17553, in_queue=32892, util=92.05% 00:16:25.292 23:58:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:25.292 [global] 00:16:25.292 thread=1 00:16:25.292 invalidate=1 00:16:25.292 rw=randwrite 00:16:25.292 time_based=1 00:16:25.292 runtime=1 00:16:25.292 ioengine=libaio 00:16:25.292 direct=1 00:16:25.292 bs=4096 00:16:25.292 iodepth=128 00:16:25.292 norandommap=0 00:16:25.292 numjobs=1 00:16:25.292 00:16:25.292 verify_dump=1 00:16:25.292 verify_backlog=512 00:16:25.292 verify_state_save=0 00:16:25.292 do_verify=1 00:16:25.292 verify=crc32c-intel 00:16:25.292 [job0] 00:16:25.292 filename=/dev/nvme0n1 00:16:25.292 [job1] 00:16:25.293 filename=/dev/nvme0n2 00:16:25.293 [job2] 00:16:25.293 filename=/dev/nvme0n3 00:16:25.293 [job3] 00:16:25.293 filename=/dev/nvme0n4 00:16:25.293 Could not set queue depth (nvme0n1) 00:16:25.293 Could not set queue depth 
(nvme0n2) 00:16:25.293 Could not set queue depth (nvme0n3) 00:16:25.293 Could not set queue depth (nvme0n4) 00:16:25.550 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:25.550 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:25.550 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:25.550 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:25.550 fio-3.35 00:16:25.550 Starting 4 threads 00:16:26.925 00:16:26.925 job0: (groupid=0, jobs=1): err= 0: pid=3576246: Tue May 14 23:58:27 2024 00:16:26.925 read: IOPS=2805, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1007msec) 00:16:26.925 slat (usec): min=2, max=29183, avg=149.80, stdev=1131.97 00:16:26.925 clat (usec): min=5855, max=64681, avg=19718.87, stdev=9470.99 00:16:26.925 lat (usec): min=5865, max=64685, avg=19868.66, stdev=9563.74 00:16:26.925 clat percentiles (usec): 00:16:26.925 | 1.00th=[10421], 5.00th=[11338], 10.00th=[11731], 20.00th=[12911], 00:16:26.925 | 30.00th=[13960], 40.00th=[15008], 50.00th=[15926], 60.00th=[18744], 00:16:26.925 | 70.00th=[21103], 80.00th=[24773], 90.00th=[31589], 95.00th=[42730], 00:16:26.925 | 99.00th=[53740], 99.50th=[62129], 99.90th=[64750], 99.95th=[64750], 00:16:26.925 | 99.99th=[64750] 00:16:26.925 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:16:26.925 slat (usec): min=3, max=26028, avg=178.28, stdev=1039.58 00:16:26.925 clat (msec): min=3, max=116, avg=23.33, stdev=25.74 00:16:26.925 lat (msec): min=4, max=116, avg=23.50, stdev=25.91 00:16:26.925 clat percentiles (msec): 00:16:26.925 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:16:26.925 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:16:26.925 | 70.00th=[ 17], 80.00th=[ 25], 90.00th=[ 63], 95.00th=[ 97], 00:16:26.925 | 99.00th=[ 109], 99.50th=[ 113], 99.90th=[ 116], 99.95th=[ 116], 00:16:26.925 | 99.99th=[ 116] 00:16:26.925 bw ( KiB/s): min= 7208, max=17368, per=18.88%, avg=12288.00, stdev=7184.20, samples=2 00:16:26.925 iops : min= 1802, max= 4342, avg=3072.00, stdev=1796.05, samples=2 00:16:26.925 lat (msec) : 4=0.02%, 10=12.19%, 20=58.10%, 50=22.50%, 100=4.85% 00:16:26.925 lat (msec) : 250=2.34% 00:16:26.925 cpu : usr=3.88%, sys=4.37%, ctx=295, majf=0, minf=1 00:16:26.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:26.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.925 issued rwts: total=2825,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.925 job1: (groupid=0, jobs=1): err= 0: pid=3576253: Tue May 14 23:58:27 2024 00:16:26.925 read: IOPS=4145, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1002msec) 00:16:26.925 slat (usec): min=2, max=21137, avg=119.68, stdev=814.30 00:16:26.925 clat (usec): min=585, max=73376, avg=15534.20, stdev=8832.20 00:16:26.925 lat (usec): min=2310, max=73387, avg=15653.88, stdev=8902.05 00:16:26.925 clat percentiles (usec): 00:16:26.925 | 1.00th=[ 2737], 5.00th=[ 7439], 10.00th=[ 9503], 20.00th=[10552], 00:16:26.925 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[13304], 00:16:26.925 | 70.00th=[16450], 80.00th=[19530], 90.00th=[25297], 95.00th=[31589], 00:16:26.925 | 99.00th=[54264], 
99.50th=[63701], 99.90th=[66323], 99.95th=[72877], 00:16:26.925 | 99.99th=[72877] 00:16:26.925 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:16:26.925 slat (usec): min=2, max=13599, avg=101.42, stdev=585.07 00:16:26.925 clat (usec): min=1915, max=73347, avg=13560.29, stdev=6083.74 00:16:26.925 lat (usec): min=1931, max=73352, avg=13661.72, stdev=6085.17 00:16:26.925 clat percentiles (usec): 00:16:26.925 | 1.00th=[ 6325], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10028], 00:16:26.925 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11600], 60.00th=[12649], 00:16:26.925 | 70.00th=[14222], 80.00th=[17171], 90.00th=[19268], 95.00th=[22414], 00:16:26.925 | 99.00th=[42206], 99.50th=[45351], 99.90th=[54264], 99.95th=[72877], 00:16:26.925 | 99.99th=[72877] 00:16:26.925 bw ( KiB/s): min=14685, max=21656, per=27.92%, avg=18170.50, stdev=4929.24, samples=2 00:16:26.925 iops : min= 3671, max= 5414, avg=4542.50, stdev=1232.49, samples=2 00:16:26.925 lat (usec) : 750=0.01% 00:16:26.925 lat (msec) : 2=0.02%, 4=1.18%, 10=16.27%, 20=69.40%, 50=12.33% 00:16:26.925 lat (msec) : 100=0.79% 00:16:26.925 cpu : usr=4.10%, sys=5.09%, ctx=509, majf=0, minf=1 00:16:26.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:26.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.925 issued rwts: total=4154,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.925 job2: (groupid=0, jobs=1): err= 0: pid=3576274: Tue May 14 23:58:27 2024 00:16:26.925 read: IOPS=4827, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1007msec) 00:16:26.925 slat (usec): min=2, max=14743, avg=99.00, stdev=753.42 00:16:26.925 clat (usec): min=3825, max=36242, avg=13503.27, stdev=3987.94 00:16:26.925 lat (usec): min=5943, max=36265, avg=13602.27, stdev=4037.37 00:16:26.925 clat percentiles (usec): 00:16:26.925 | 1.00th=[ 6783], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[10421], 00:16:26.925 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12256], 60.00th=[13435], 00:16:26.925 | 70.00th=[14877], 80.00th=[16909], 90.00th=[18744], 95.00th=[21103], 00:16:26.925 | 99.00th=[25560], 99.50th=[26084], 99.90th=[29230], 99.95th=[29230], 00:16:26.925 | 99.99th=[36439] 00:16:26.925 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:16:26.925 slat (usec): min=2, max=12402, avg=89.65, stdev=555.51 00:16:26.925 clat (usec): min=942, max=30755, avg=12118.80, stdev=4356.04 00:16:26.925 lat (usec): min=956, max=30759, avg=12208.45, stdev=4358.27 00:16:26.925 clat percentiles (usec): 00:16:26.925 | 1.00th=[ 3261], 5.00th=[ 6194], 10.00th=[ 7046], 20.00th=[ 8455], 00:16:26.925 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11600], 60.00th=[12256], 00:16:26.925 | 70.00th=[14091], 80.00th=[15533], 90.00th=[17695], 95.00th=[19530], 00:16:26.925 | 99.00th=[25560], 99.50th=[27657], 99.90th=[30016], 99.95th=[30802], 00:16:26.925 | 99.99th=[30802] 00:16:26.925 bw ( KiB/s): min=20488, max=20513, per=31.50%, avg=20500.50, stdev=17.68, samples=2 00:16:26.925 iops : min= 5122, max= 5128, avg=5125.00, stdev= 4.24, samples=2 00:16:26.925 lat (usec) : 1000=0.03% 00:16:26.925 lat (msec) : 2=0.35%, 4=0.50%, 10=23.12%, 20=70.82%, 50=5.17% 00:16:26.925 cpu : usr=4.57%, sys=7.85%, ctx=511, majf=0, minf=1 00:16:26.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:26.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.925 issued rwts: total=4861,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.925 job3: (groupid=0, jobs=1): err= 0: pid=3576282: Tue May 14 23:58:27 2024 00:16:26.925 read: IOPS=3100, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1006msec) 00:16:26.925 slat (usec): min=2, max=15498, avg=120.48, stdev=798.25 00:16:26.925 clat (usec): min=5387, max=49241, avg=15608.00, stdev=8498.58 00:16:26.925 lat (usec): min=6950, max=49261, avg=15728.48, stdev=8578.01 00:16:26.925 clat percentiles (usec): 00:16:26.925 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10028], 00:16:26.925 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11207], 60.00th=[13960], 00:16:26.925 | 70.00th=[15664], 80.00th=[21365], 90.00th=[28181], 95.00th=[33817], 00:16:26.925 | 99.00th=[43254], 99.50th=[45351], 99.90th=[48497], 99.95th=[49021], 00:16:26.925 | 99.99th=[49021] 00:16:26.925 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:16:26.925 slat (usec): min=2, max=17474, avg=167.49, stdev=870.39 00:16:26.925 clat (usec): min=1399, max=93695, avg=22035.34, stdev=15430.98 00:16:26.925 lat (usec): min=1413, max=93711, avg=22202.84, stdev=15518.56 00:16:26.925 clat percentiles (usec): 00:16:26.925 | 1.00th=[ 5276], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[12780], 00:16:26.925 | 30.00th=[14353], 40.00th=[15664], 50.00th=[17171], 60.00th=[20317], 00:16:26.925 | 70.00th=[22152], 80.00th=[25297], 90.00th=[39060], 95.00th=[56886], 00:16:26.925 | 99.00th=[90702], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:16:26.925 | 99.99th=[93848] 00:16:26.925 bw ( KiB/s): min= 7560, max=20513, per=21.57%, avg=14036.50, stdev=9159.15, samples=2 00:16:26.925 iops : min= 1890, max= 5128, avg=3509.00, stdev=2289.61, samples=2 00:16:26.925 lat (msec) : 2=0.03%, 4=0.18%, 10=15.05%, 20=52.29%, 50=29.02% 00:16:26.925 lat (msec) : 100=3.43% 00:16:26.925 cpu : usr=3.18%, sys=3.88%, ctx=554, majf=0, minf=1 00:16:26.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:26.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.925 issued rwts: total=3119,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.925 00:16:26.925 Run status group 0 (all jobs): 00:16:26.925 READ: bw=58.0MiB/s (60.8MB/s), 11.0MiB/s-18.9MiB/s (11.5MB/s-19.8MB/s), io=58.4MiB (61.3MB), run=1002-1007msec 00:16:26.925 WRITE: bw=63.6MiB/s (66.6MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.8MB/s), io=64.0MiB (67.1MB), run=1002-1007msec 00:16:26.925 00:16:26.925 Disk stats (read/write): 00:16:26.925 nvme0n1: ios=2071/2055, merge=0/0, ticks=40875/58645, in_queue=99520, util=99.00% 00:16:26.925 nvme0n2: ios=3330/3584, merge=0/0, ticks=34418/32012, in_queue=66430, util=84.33% 00:16:26.925 nvme0n3: ios=3911/4096, merge=0/0, ticks=53027/45456, in_queue=98483, util=87.96% 00:16:26.925 nvme0n4: ios=3072/3151, merge=0/0, ticks=18770/23219, in_queue=41989, util=89.20% 00:16:26.925 23:58:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:26.925 23:58:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3576500 00:16:26.925 23:58:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:26.925 23:58:27 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:26.925 [global] 00:16:26.925 thread=1 00:16:26.925 invalidate=1 00:16:26.925 rw=read 00:16:26.925 time_based=1 00:16:26.925 runtime=10 00:16:26.925 ioengine=libaio 00:16:26.925 direct=1 00:16:26.925 bs=4096 00:16:26.925 iodepth=1 00:16:26.925 norandommap=1 00:16:26.925 numjobs=1 00:16:26.925 00:16:26.925 [job0] 00:16:26.925 filename=/dev/nvme0n1 00:16:26.925 [job1] 00:16:26.925 filename=/dev/nvme0n2 00:16:26.925 [job2] 00:16:26.925 filename=/dev/nvme0n3 00:16:26.925 [job3] 00:16:26.925 filename=/dev/nvme0n4 00:16:26.925 Could not set queue depth (nvme0n1) 00:16:26.925 Could not set queue depth (nvme0n2) 00:16:26.925 Could not set queue depth (nvme0n3) 00:16:26.925 Could not set queue depth (nvme0n4) 00:16:27.184 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.184 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.184 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.184 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.184 fio-3.35 00:16:27.184 Starting 4 threads 00:16:30.476 23:58:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:30.476 23:58:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:30.476 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=905216, buflen=4096 00:16:30.476 fio: pid=3576734, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:30.476 23:58:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.476 23:58:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:30.476 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1966080, buflen=4096 00:16:30.476 fio: pid=3576726, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:30.476 23:58:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.476 23:58:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:30.476 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=290816, buflen=4096 00:16:30.476 fio: pid=3576683, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:30.735 23:58:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.735 23:58:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:30.735 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=3411968, buflen=4096 00:16:30.735 fio: pid=3576704, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:30.735 00:16:30.735 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3576683: Tue May 14 
23:58:31 2024 00:16:30.735 read: IOPS=23, BW=94.3KiB/s (96.6kB/s)(284KiB/3012msec) 00:16:30.735 slat (usec): min=22, max=13693, avg=215.80, stdev=1610.75 00:16:30.735 clat (usec): min=40908, max=44060, avg=41897.55, stdev=418.27 00:16:30.735 lat (usec): min=40935, max=55044, avg=42116.02, stdev=1609.76 00:16:30.735 clat percentiles (usec): 00:16:30.735 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:30.735 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:30.735 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:30.735 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:16:30.735 | 99.99th=[44303] 00:16:30.735 bw ( KiB/s): min= 88, max= 96, per=4.69%, avg=94.40, stdev= 3.58, samples=5 00:16:30.735 iops : min= 22, max= 24, avg=23.60, stdev= 0.89, samples=5 00:16:30.735 lat (msec) : 50=98.61% 00:16:30.735 cpu : usr=0.13%, sys=0.00%, ctx=75, majf=0, minf=1 00:16:30.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.735 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.735 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.735 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3576704: Tue May 14 23:58:31 2024 00:16:30.735 read: IOPS=260, BW=1041KiB/s (1066kB/s)(3332KiB/3200msec) 00:16:30.736 slat (usec): min=8, max=35603, avg=86.34, stdev=1558.81 00:16:30.736 clat (usec): min=465, max=43018, avg=3727.48, stdev=11006.11 00:16:30.736 lat (usec): min=474, max=76939, avg=3813.89, stdev=11376.69 00:16:30.736 clat percentiles (usec): 00:16:30.736 | 1.00th=[ 482], 5.00th=[ 498], 10.00th=[ 502], 20.00th=[ 510], 00:16:30.736 | 30.00th=[ 515], 40.00th=[ 523], 50.00th=[ 529], 60.00th=[ 537], 00:16:30.736 | 70.00th=[ 553], 80.00th=[ 635], 90.00th=[ 783], 95.00th=[41681], 00:16:30.736 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:16:30.736 | 99.99th=[43254] 00:16:30.736 bw ( KiB/s): min= 84, max= 6160, per=55.03%, avg=1104.67, stdev=2476.60, samples=6 00:16:30.736 iops : min= 21, max= 1540, avg=276.17, stdev=619.15, samples=6 00:16:30.736 lat (usec) : 500=8.39%, 750=77.22%, 1000=6.35% 00:16:30.736 lat (msec) : 2=0.24%, 50=7.67% 00:16:30.736 cpu : usr=0.22%, sys=0.41%, ctx=839, majf=0, minf=1 00:16:30.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.736 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.736 issued rwts: total=834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.736 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3576726: Tue May 14 23:58:31 2024 00:16:30.736 read: IOPS=169, BW=677KiB/s (693kB/s)(1920KiB/2838msec) 00:16:30.736 slat (nsec): min=9466, max=42883, avg=12850.19, stdev=5840.02 00:16:30.736 clat (usec): min=305, max=51969, avg=5850.80, stdev=13916.83 00:16:30.736 lat (usec): min=315, max=51998, avg=5863.62, stdev=13921.86 00:16:30.736 clat percentiles (usec): 00:16:30.736 | 1.00th=[ 314], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 383], 00:16:30.736 | 30.00th=[ 433], 40.00th=[ 478], 50.00th=[ 506], 60.00th=[ 
523], 00:16:30.736 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[41681], 95.00th=[42206], 00:16:30.736 | 99.00th=[42206], 99.50th=[42206], 99.90th=[52167], 99.95th=[52167], 00:16:30.736 | 99.99th=[52167] 00:16:30.736 bw ( KiB/s): min= 96, max= 3384, per=37.63%, avg=755.20, stdev=1469.55, samples=5 00:16:30.736 iops : min= 24, max= 846, avg=188.80, stdev=367.39, samples=5 00:16:30.736 lat (usec) : 500=46.36%, 750=36.17%, 1000=3.95% 00:16:30.736 lat (msec) : 2=0.21%, 10=0.21%, 50=12.68%, 100=0.21% 00:16:30.736 cpu : usr=0.18%, sys=0.32%, ctx=484, majf=0, minf=1 00:16:30.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.736 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.736 issued rwts: total=481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.736 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3576734: Tue May 14 23:58:31 2024 00:16:30.736 read: IOPS=83, BW=334KiB/s (342kB/s)(884KiB/2649msec) 00:16:30.736 slat (nsec): min=9154, max=34813, avg=14295.68, stdev=6669.80 00:16:30.736 clat (usec): min=332, max=44947, avg=11876.14, stdev=18525.79 00:16:30.736 lat (usec): min=342, max=44978, avg=11890.39, stdev=18531.76 00:16:30.736 clat percentiles (usec): 00:16:30.736 | 1.00th=[ 355], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 375], 00:16:30.736 | 30.00th=[ 383], 40.00th=[ 416], 50.00th=[ 461], 60.00th=[ 529], 00:16:30.736 | 70.00th=[ 783], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:16:30.736 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:16:30.736 | 99.99th=[44827] 00:16:30.736 bw ( KiB/s): min= 96, max= 1360, per=17.35%, avg=348.80, stdev=565.28, samples=5 00:16:30.736 iops : min= 24, max= 340, avg=87.20, stdev=141.32, samples=5 00:16:30.736 lat (usec) : 500=54.50%, 750=8.56%, 1000=9.01% 00:16:30.736 lat (msec) : 50=27.48% 00:16:30.736 cpu : usr=0.00%, sys=0.26%, ctx=222, majf=0, minf=2 00:16:30.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.736 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.736 issued rwts: total=222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.736 00:16:30.736 Run status group 0 (all jobs): 00:16:30.736 READ: bw=2006KiB/s (2054kB/s), 94.3KiB/s-1041KiB/s (96.6kB/s-1066kB/s), io=6420KiB (6574kB), run=2649-3200msec 00:16:30.736 00:16:30.736 Disk stats (read/write): 00:16:30.736 nvme0n1: ios=67/0, merge=0/0, ticks=2811/0, in_queue=2811, util=94.36% 00:16:30.736 nvme0n2: ios=830/0, merge=0/0, ticks=2968/0, in_queue=2968, util=93.52% 00:16:30.736 nvme0n3: ios=520/0, merge=0/0, ticks=3768/0, in_queue=3768, util=99.28% 00:16:30.736 nvme0n4: ios=219/0, merge=0/0, ticks=2540/0, in_queue=2540, util=96.48% 00:16:30.736 23:58:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.736 23:58:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:30.994 23:58:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
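The Remote I/O errors fio reports on /dev/nvme0n1-n4 in this stretch are the expected result of the hot-remove step: while the 10-second read jobs run, the test script deletes the bdevs backing the exported namespaces over RPC. A condensed sketch of that step, using only the rpc.py subcommands and bdev names visible in the trace (the explicit list form and the rpc_py shorthand are assumptions, not copied from the script):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # remove the RAID bdevs first, then the malloc bdevs
  $rpc_py bdev_raid_delete concat0
  $rpc_py bdev_raid_delete raid0
  for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      # each delete hot-removes a namespace under the running fio jobs,
      # which then log 'Remote I/O error' on the matching /dev/nvme0nX
      $rpc_py bdev_malloc_delete "$malloc_bdev"
  done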
00:16:30.994 23:58:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:31.252 23:58:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:31.252 23:58:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:31.510 23:58:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:31.510 23:58:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:31.510 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:31.510 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3576500 00:16:31.510 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:31.510 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:31.769 nvmf hotplug test: fio failed as expected 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.769 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.769 rmmod nvme_tcp 00:16:32.027 rmmod nvme_fabrics 00:16:32.027 rmmod 
nvme_keyring 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3573413 ']' 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3573413 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3573413 ']' 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3573413 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3573413 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3573413' 00:16:32.027 killing process with pid 3573413 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3573413 00:16:32.027 [2024-05-14 23:58:32.471895] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:32.027 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3573413 00:16:32.286 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:32.286 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:32.286 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:32.286 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.286 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.286 23:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.286 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.286 23:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.188 23:58:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.188 00:16:34.188 real 0m28.154s 00:16:34.188 user 2m2.142s 00:16:34.188 sys 0m9.561s 00:16:34.188 23:58:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:34.188 23:58:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.188 ************************************ 00:16:34.188 END TEST nvmf_fio_target 00:16:34.188 ************************************ 00:16:34.446 23:58:34 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:34.446 23:58:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:34.446 23:58:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:34.446 23:58:34 nvmf_tcp -- common/autotest_common.sh@10 -- # 
set +x 00:16:34.446 ************************************ 00:16:34.446 START TEST nvmf_bdevio 00:16:34.446 ************************************ 00:16:34.446 23:58:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:34.446 * Looking for test storage... 00:16:34.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.446 23:58:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.446 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:34.446 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.446 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.446 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.446 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.446 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.447 23:58:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.005 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:41.006 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:41.006 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:41.006 Found net devices under 0000:af:00.0: cvl_0_0 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:41.006 
Found net devices under 0000:af:00.1: cvl_0_1 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.006 23:58:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:16:41.006 00:16:41.006 --- 10.0.0.2 ping statistics --- 00:16:41.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.006 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:41.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:16:41.006 00:16:41.006 --- 10.0.0.1 ping statistics --- 00:16:41.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.006 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3581178 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3581178 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3581178 ']' 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:41.006 23:58:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:41.006 [2024-05-14 23:58:41.322590] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:16:41.006 [2024-05-14 23:58:41.322637] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.006 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.006 [2024-05-14 23:58:41.395726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.006 [2024-05-14 23:58:41.467353] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.006 [2024-05-14 23:58:41.467393] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:41.006 [2024-05-14 23:58:41.467403] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.006 [2024-05-14 23:58:41.467412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.006 [2024-05-14 23:58:41.467422] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.006 [2024-05-14 23:58:41.467477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:41.006 [2024-05-14 23:58:41.467592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:41.006 [2024-05-14 23:58:41.467703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.006 [2024-05-14 23:58:41.467704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:41.571 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:41.571 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:16:41.571 23:58:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.571 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:41.571 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:41.828 [2024-05-14 23:58:42.174097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:41.828 Malloc0 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.828 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
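The rpc_cmd calls traced just above are what builds the target that the bdevio run connects to. A condensed sketch of the same sequence as direct rpc.py invocations, with every argument taken from the trace (issuing them outside the harness's rpc_cmd wrapper is the only assumption):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192                 # enable the TCP transport
  $rpc_py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB malloc bdev, 512-byte blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420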
00:16:41.829 [2024-05-14 23:58:42.228503] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:41.829 [2024-05-14 23:58:42.228779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.829 { 00:16:41.829 "params": { 00:16:41.829 "name": "Nvme$subsystem", 00:16:41.829 "trtype": "$TEST_TRANSPORT", 00:16:41.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.829 "adrfam": "ipv4", 00:16:41.829 "trsvcid": "$NVMF_PORT", 00:16:41.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.829 "hdgst": ${hdgst:-false}, 00:16:41.829 "ddgst": ${ddgst:-false} 00:16:41.829 }, 00:16:41.829 "method": "bdev_nvme_attach_controller" 00:16:41.829 } 00:16:41.829 EOF 00:16:41.829 )") 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:41.829 23:58:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:41.829 "params": { 00:16:41.829 "name": "Nvme1", 00:16:41.829 "trtype": "tcp", 00:16:41.829 "traddr": "10.0.0.2", 00:16:41.829 "adrfam": "ipv4", 00:16:41.829 "trsvcid": "4420", 00:16:41.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.829 "hdgst": false, 00:16:41.829 "ddgst": false 00:16:41.829 }, 00:16:41.829 "method": "bdev_nvme_attach_controller" 00:16:41.829 }' 00:16:41.829 [2024-05-14 23:58:42.280706] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:16:41.829 [2024-05-14 23:58:42.280755] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581429 ] 00:16:41.829 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.829 [2024-05-14 23:58:42.352380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:42.086 [2024-05-14 23:58:42.425557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.086 [2024-05-14 23:58:42.425652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.086 [2024-05-14 23:58:42.425652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.344 I/O targets: 00:16:42.344 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:42.344 00:16:42.344 00:16:42.344 CUnit - A unit testing framework for C - Version 2.1-3 00:16:42.344 http://cunit.sourceforge.net/ 00:16:42.344 00:16:42.344 00:16:42.344 Suite: bdevio tests on: Nvme1n1 00:16:42.344 Test: blockdev write read block ...passed 00:16:42.344 Test: blockdev write zeroes read block ...passed 00:16:42.344 Test: blockdev write zeroes read no split ...passed 00:16:42.344 Test: blockdev write zeroes read split ...passed 00:16:42.601 Test: blockdev write zeroes read split partial ...passed 00:16:42.601 Test: blockdev reset ...[2024-05-14 23:58:42.986066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:42.601 [2024-05-14 23:58:42.986131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18277b0 (9): Bad file descriptor 00:16:42.601 [2024-05-14 23:58:43.047597] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:42.601 passed 00:16:42.601 Test: blockdev write read 8 blocks ...passed 00:16:42.601 Test: blockdev write read size > 128k ...passed 00:16:42.601 Test: blockdev write read invalid size ...passed 00:16:42.601 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:42.601 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:42.601 Test: blockdev write read max offset ...passed 00:16:42.601 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:42.601 Test: blockdev writev readv 8 blocks ...passed 00:16:42.601 Test: blockdev writev readv 30 x 1block ...passed 00:16:42.860 Test: blockdev writev readv block ...passed 00:16:42.860 Test: blockdev writev readv size > 128k ...passed 00:16:42.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:42.860 Test: blockdev comparev and writev ...[2024-05-14 23:58:43.234231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.860 [2024-05-14 23:58:43.234260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.860 [2024-05-14 23:58:43.234276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.860 [2024-05-14 23:58:43.234287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.860 [2024-05-14 23:58:43.234703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.860 [2024-05-14 23:58:43.234718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.860 [2024-05-14 23:58:43.234736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.860 [2024-05-14 23:58:43.234747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.860 [2024-05-14 23:58:43.235170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.860 [2024-05-14 23:58:43.235184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.860 [2024-05-14 23:58:43.235201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.860 [2024-05-14 23:58:43.235211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:42.860 [2024-05-14 23:58:43.235625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.860 [2024-05-14 23:58:43.235639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:42.860 [2024-05-14 23:58:43.235653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.860 [2024-05-14 23:58:43.235663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:42.860 passed 00:16:42.860 Test: blockdev nvme passthru rw ...passed 00:16:42.860 Test: blockdev nvme passthru vendor specific ...[2024-05-14 23:58:43.318874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.860 [2024-05-14 23:58:43.318895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:42.860 [2024-05-14 23:58:43.319194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.860 [2024-05-14 23:58:43.319207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:42.860 [2024-05-14 23:58:43.319505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.860 [2024-05-14 23:58:43.319517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:42.860 [2024-05-14 23:58:43.319813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.860 [2024-05-14 23:58:43.319825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:42.860 passed 00:16:42.860 Test: blockdev nvme admin passthru ...passed 00:16:42.860 Test: blockdev copy ...passed 00:16:42.860 00:16:42.860 Run Summary: Type Total Ran Passed Failed Inactive 00:16:42.860 suites 1 1 n/a 0 0 00:16:42.860 tests 23 23 23 0 0 00:16:42.860 asserts 152 152 152 0 n/a 00:16:42.860 00:16:42.860 Elapsed time = 1.255 seconds 00:16:43.117 23:58:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.117 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.117 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.118 rmmod nvme_tcp 00:16:43.118 rmmod nvme_fabrics 00:16:43.118 rmmod nvme_keyring 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3581178 ']' 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3581178 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3581178 ']' 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3581178 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3581178 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3581178' 00:16:43.118 killing process with pid 3581178 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3581178 00:16:43.118 [2024-05-14 23:58:43.695419] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:43.118 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3581178 00:16:43.376 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.376 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.376 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.376 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.376 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.376 23:58:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.376 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.376 23:58:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.967 23:58:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.967 00:16:45.967 real 0m11.144s 00:16:45.967 user 0m13.832s 00:16:45.967 sys 0m5.367s 00:16:45.967 23:58:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:45.967 23:58:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:45.967 ************************************ 00:16:45.967 END TEST nvmf_bdevio 00:16:45.967 ************************************ 00:16:45.967 23:58:46 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:16:45.967 23:58:46 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:45.968 23:58:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:45.968 23:58:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:45.968 23:58:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.968 ************************************ 00:16:45.968 START TEST nvmf_bdevio_no_huge 00:16:45.968 ************************************ 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:45.968 * Looking for test storage... 
00:16:45.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.968 23:58:46 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.968 23:58:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:52.532 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:52.532 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:52.532 Found net devices under 0000:af:00.0: cvl_0_0 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.532 23:58:52 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:52.532 Found net devices under 0000:af:00.1: cvl_0_1 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:52.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:16:52.532 00:16:52.532 --- 10.0.0.2 ping statistics --- 00:16:52.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.532 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:52.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:16:52.532 00:16:52.532 --- 10.0.0.1 ping statistics --- 00:16:52.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.532 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.532 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3585278 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3585278 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3585278 ']' 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
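Both pings succeeding is the gate for the rest of the run: the two ice ports are split across network namespaces so that target and initiator traffic crosses the physical ports rather than loopback, with cvl_0_0 moved into cvl_0_0_ns_spdk as 10.0.0.2 and cvl_0_1 left in the root namespace as 10.0.0.1. Condensed from the trace above (interface names as detected on this node):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                                             # root ns -> namespaced port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespaced port -> root ns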
00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.533 23:58:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:52.533 [2024-05-14 23:58:52.989048] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:16:52.533 [2024-05-14 23:58:52.989098] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:52.533 [2024-05-14 23:58:53.068677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.790 [2024-05-14 23:58:53.166864] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.790 [2024-05-14 23:58:53.166899] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.790 [2024-05-14 23:58:53.166908] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.790 [2024-05-14 23:58:53.166917] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.790 [2024-05-14 23:58:53.166924] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.790 [2024-05-14 23:58:53.166982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:52.790 [2024-05-14 23:58:53.167091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:52.790 [2024-05-14 23:58:53.167215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.790 [2024-05-14 23:58:53.167217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.357 [2024-05-14 23:58:53.823481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.357 Malloc0 
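What distinguishes this pass from the previous bdevio run is the --no-huge -s 1024 pair on the target (and, later, on bdevio itself): EAL is told not to use hugetlbfs at all and to satisfy a 1024 MB memory budget from ordinary pages. Stripped of the log-level plumbing, the launch traced above is essentially (sketch; the full flag set is in the trace):

  # -m 0x78 pins reactors to cores 3-6, matching the four 'Reactor started' lines above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0x78 --no-huge -s 1024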
00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:53.357 [2024-05-14 23:58:53.860013] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:53.357 [2024-05-14 23:58:53.860241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:53.357 { 00:16:53.357 "params": { 00:16:53.357 "name": "Nvme$subsystem", 00:16:53.357 "trtype": "$TEST_TRANSPORT", 00:16:53.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.357 "adrfam": "ipv4", 00:16:53.357 "trsvcid": "$NVMF_PORT", 00:16:53.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.357 "hdgst": ${hdgst:-false}, 00:16:53.357 "ddgst": ${ddgst:-false} 00:16:53.357 }, 00:16:53.357 "method": "bdev_nvme_attach_controller" 00:16:53.357 } 00:16:53.357 EOF 00:16:53.357 )") 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:53.357 23:58:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:53.357 "params": { 00:16:53.357 "name": "Nvme1", 00:16:53.357 "trtype": "tcp", 00:16:53.357 "traddr": "10.0.0.2", 00:16:53.357 "adrfam": "ipv4", 00:16:53.357 "trsvcid": "4420", 00:16:53.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.357 "hdgst": false, 00:16:53.357 "ddgst": false 00:16:53.357 }, 00:16:53.357 "method": "bdev_nvme_attach_controller" 00:16:53.357 }' 00:16:53.357 [2024-05-14 23:58:53.909298] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:16:53.357 [2024-05-14 23:58:53.909349] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3585450 ] 00:16:53.616 [2024-05-14 23:58:53.984179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:53.616 [2024-05-14 23:58:54.084867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.616 [2024-05-14 23:58:54.084962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.616 [2024-05-14 23:58:54.084964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.874 I/O targets: 00:16:53.874 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:53.874 00:16:53.874 00:16:53.874 CUnit - A unit testing framework for C - Version 2.1-3 00:16:53.874 http://cunit.sourceforge.net/ 00:16:53.874 00:16:53.874 00:16:53.874 Suite: bdevio tests on: Nvme1n1 00:16:53.875 Test: blockdev write read block ...passed 00:16:54.133 Test: blockdev write zeroes read block ...passed 00:16:54.133 Test: blockdev write zeroes read no split ...passed 00:16:54.133 Test: blockdev write zeroes read split ...passed 00:16:54.133 Test: blockdev write zeroes read split partial ...passed 00:16:54.133 Test: blockdev reset ...[2024-05-14 23:58:54.615504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:54.133 [2024-05-14 23:58:54.615570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209e910 (9): Bad file descriptor 00:16:54.133 [2024-05-14 23:58:54.686888] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:54.133 passed 00:16:54.133 Test: blockdev write read 8 blocks ...passed 00:16:54.133 Test: blockdev write read size > 128k ...passed 00:16:54.133 Test: blockdev write read invalid size ...passed 00:16:54.391 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:54.391 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:54.391 Test: blockdev write read max offset ...passed 00:16:54.391 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:54.391 Test: blockdev writev readv 8 blocks ...passed 00:16:54.391 Test: blockdev writev readv 30 x 1block ...passed 00:16:54.391 Test: blockdev writev readv block ...passed 00:16:54.391 Test: blockdev writev readv size > 128k ...passed 00:16:54.391 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:54.391 Test: blockdev comparev and writev ...[2024-05-14 23:58:54.871284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.391 [2024-05-14 23:58:54.871316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:54.391 [2024-05-14 23:58:54.871332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.391 [2024-05-14 23:58:54.871343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:54.391 [2024-05-14 23:58:54.871769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.391 [2024-05-14 23:58:54.871782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:54.391 [2024-05-14 23:58:54.871796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.391 [2024-05-14 23:58:54.871807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:54.391 [2024-05-14 23:58:54.872248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.391 [2024-05-14 23:58:54.872262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:54.391 [2024-05-14 23:58:54.872276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.391 [2024-05-14 23:58:54.872286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:54.391 [2024-05-14 23:58:54.872701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.391 [2024-05-14 23:58:54.872715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:54.391 [2024-05-14 23:58:54.872733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:54.391 [2024-05-14 23:58:54.872743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:54.391 passed 00:16:54.391 Test: blockdev nvme passthru rw ...passed 00:16:54.391 Test: blockdev nvme passthru vendor specific ...[2024-05-14 23:58:54.955850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.391 [2024-05-14 23:58:54.955867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:54.391 [2024-05-14 23:58:54.956158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.391 [2024-05-14 23:58:54.956170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:54.391 [2024-05-14 23:58:54.956552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.391 [2024-05-14 23:58:54.956565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:54.391 [2024-05-14 23:58:54.956861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.391 [2024-05-14 23:58:54.956875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:54.391 passed 00:16:54.391 Test: blockdev nvme admin passthru ...passed 00:16:54.649 Test: blockdev copy ...passed 00:16:54.649 00:16:54.649 Run Summary: Type Total Ran Passed Failed Inactive 00:16:54.649 suites 1 1 n/a 0 0 00:16:54.649 tests 23 23 23 0 0 00:16:54.649 asserts 152 152 152 0 n/a 00:16:54.649 00:16:54.649 Elapsed time = 1.241 seconds 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.906 rmmod nvme_tcp 00:16:54.906 rmmod nvme_fabrics 00:16:54.906 rmmod nvme_keyring 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3585278 ']' 00:16:54.906 23:58:55 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3585278 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3585278 ']' 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3585278 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3585278 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3585278' 00:16:54.906 killing process with pid 3585278 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3585278 00:16:54.906 [2024-05-14 23:58:55.489867] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:54.906 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3585278 00:16:55.471 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.471 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.471 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.471 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.472 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.472 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.472 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.472 23:58:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.373 23:58:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:57.373 00:16:57.373 real 0m11.856s 00:16:57.373 user 0m14.896s 00:16:57.373 sys 0m6.159s 00:16:57.373 23:58:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:57.373 23:58:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:57.373 ************************************ 00:16:57.373 END TEST nvmf_bdevio_no_huge 00:16:57.373 ************************************ 00:16:57.632 23:58:57 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:57.632 23:58:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:57.632 23:58:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:57.632 23:58:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:57.632 ************************************ 00:16:57.632 START TEST nvmf_tls 00:16:57.632 ************************************ 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:16:57.632 * Looking for test storage... 00:16:57.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.632 23:58:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:16:57.633 23:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:04.200 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.200 
23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:04.200 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.200 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:04.201 Found net devices under 0000:af:00.0: cvl_0_0 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:04.201 Found net devices under 0000:af:00.1: cvl_0_1 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.201 
23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:04.201 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.460 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.460 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.460 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:04.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:17:04.460 00:17:04.460 --- 10.0.0.2 ping statistics --- 00:17:04.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.460 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:17:04.460 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:04.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:17:04.460 00:17:04.460 --- 10.0.0.1 ping statistics --- 00:17:04.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.460 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3589409 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3589409 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3589409 ']' 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:04.461 23:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.461 [2024-05-14 23:59:04.983624] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:04.461 [2024-05-14 23:59:04.983672] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.461 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.720 [2024-05-14 23:59:05.058875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.720 [2024-05-14 23:59:05.125723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.720 [2024-05-14 23:59:05.125766] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
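
Before the target above was launched, nvmf_tcp_init rebuilt the loopback test bed whose two pings were just verified: the first ice port (cvl_0_0) is moved into a private namespace as the target side, the second (cvl_0_1) stays in the root namespace as the initiator side. Collapsed into plain ip commands, all taken verbatim from the trace:

# Loopback test bed built by nvmf_tcp_init: one port per role, both on 10.0.0.0/24.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Initiator side (root namespace) gets 10.0.0.1, target side (namespace) gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic for port 4420 into the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Both directions are then checked with a single ping each, as logged above.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
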
00:17:04.720 [2024-05-14 23:59:05.125776] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.720 [2024-05-14 23:59:05.125785] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.720 [2024-05-14 23:59:05.125793] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.720 [2024-05-14 23:59:05.125815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.286 23:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:05.286 23:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:05.286 23:59:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.286 23:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.286 23:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.286 23:59:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.286 23:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:05.286 23:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:05.544 true 00:17:05.544 23:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:05.544 23:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:05.803 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:05.803 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:05.803 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:05.803 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:05.803 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:06.061 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:06.061 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:06.061 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:06.318 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:06.318 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:06.318 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:06.318 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:06.318 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:06.318 23:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:06.576 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:06.576 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:06.576 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:06.836 23:59:07 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # jq -r .enable_ktls 00:17:06.836 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:06.836 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:06.836 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:06.836 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:07.094 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:07.094 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.8knEUWcR42 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.MXx2bEKUA5 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.8knEUWcR42 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.MXx2bEKUA5 00:17:07.390 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:07.678 23:59:07 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:07.678 23:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.8knEUWcR42 00:17:07.678 23:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8knEUWcR42 00:17:07.678 23:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:07.935 [2024-05-14 23:59:08.345914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.935 23:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:08.193 23:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:08.193 [2024-05-14 23:59:08.682741] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:08.193 [2024-05-14 23:59:08.682807] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:08.193 [2024-05-14 23:59:08.683025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.193 23:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:08.451 malloc0 00:17:08.451 23:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:08.451 23:59:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8knEUWcR42 00:17:08.709 [2024-05-14 23:59:09.148449] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:08.709 23:59:09 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.8knEUWcR42 00:17:08.709 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.679 Initializing NVMe Controllers 00:17:18.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:18.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:18.679 Initialization complete. Launching workers. 
00:17:18.679 ======================================================== 00:17:18.679 Latency(us) 00:17:18.679 Device Information : IOPS MiB/s Average min max 00:17:18.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16494.08 64.43 3880.56 768.45 5003.76 00:17:18.679 ======================================================== 00:17:18.679 Total : 16494.08 64.43 3880.56 768.45 5003.76 00:17:18.679 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8knEUWcR42 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8knEUWcR42' 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3591860 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3591860 /var/tmp/bdevperf.sock 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3591860 ']' 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:18.679 23:59:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.937 [2024-05-14 23:59:19.297694] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:17:18.937 [2024-05-14 23:59:19.297745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591860 ] 00:17:18.937 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.937 [2024-05-14 23:59:19.362180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.937 [2024-05-14 23:59:19.431342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.870 23:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:19.870 23:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:19.870 23:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8knEUWcR42 00:17:19.870 [2024-05-14 23:59:20.266000] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.870 [2024-05-14 23:59:20.266076] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:19.870 TLSTESTn1 00:17:19.870 23:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:19.870 Running I/O for 10 seconds... 00:17:32.061 00:17:32.061 Latency(us) 00:17:32.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.061 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:32.061 Verification LBA range: start 0x0 length 0x2000 00:17:32.061 TLSTESTn1 : 10.05 2095.07 8.18 0.00 0.00 60955.97 7077.89 98985.57 00:17:32.061 =================================================================================================================== 00:17:32.061 Total : 2095.07 8.18 0.00 0.00 60955.97 7077.89 98985.57 00:17:32.061 0 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3591860 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3591860 ']' 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3591860 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3591860 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3591860' 00:17:32.061 killing process with pid 3591860 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3591860 00:17:32.061 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.061 00:17:32.061 Latency(us) 00:17:32.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:32.061 =================================================================================================================== 00:17:32.061 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.061 [2024-05-14 23:59:30.614729] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3591860 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MXx2bEKUA5 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:32.061 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MXx2bEKUA5 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MXx2bEKUA5 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MXx2bEKUA5' 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3593901 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3593901 /var/tmp/bdevperf.sock 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3593901 ']' 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:32.062 23:59:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.062 [2024-05-14 23:59:30.866572] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
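
The two key files exercised by this pass and the previous one were produced by format_interchange_psk near the start of the test: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: for 00112233445566778899aabbccddeeff, and NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: for ffeeddccbbaa99887766554433221100. The helper pipes into python, and judging by the byte counts in those strings the base64 payload is the key characters followed by a little-endian CRC-32 of them. The sketch below rebuilds a key string on that assumption, with the 01 field hard-coded to match the digest argument used in this run.

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF'
import base64, struct, sys, zlib

key = sys.argv[1].encode()
# Assumed layout: key bytes plus their little-endian CRC-32, base64-encoded
# between the "NVMeTLSkey-1:01:" prefix and a trailing ':'.
blob = key + struct.pack("<I", zlib.crc32(key))
print("NVMeTLSkey-1:01:" + base64.b64encode(blob).decode() + ":")
EOF
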
00:17:32.062 [2024-05-14 23:59:30.866625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593901 ] 00:17:32.062 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.062 [2024-05-14 23:59:30.933551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.062 [2024-05-14 23:59:31.008530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MXx2bEKUA5 00:17:32.062 [2024-05-14 23:59:31.799537] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.062 [2024-05-14 23:59:31.799612] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:32.062 [2024-05-14 23:59:31.810489] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:32.062 [2024-05-14 23:59:31.811009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ca610 (107): Transport endpoint is not connected 00:17:32.062 [2024-05-14 23:59:31.812000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ca610 (9): Bad file descriptor 00:17:32.062 [2024-05-14 23:59:31.813002] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:32.062 [2024-05-14 23:59:31.813014] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:32.062 [2024-05-14 23:59:31.813026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
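
Each pass in this test drives a separate bdevperf process over its own RPC socket, and the xtrace above shows the pattern. Put together (with the waitforlisten helper from autotest_common.sh assumed to be sourced, and the background pid taken from $! rather than the script's own bookkeeping), one pass looks like the sketch below. With the key used here, /tmp/tmp.MXx2bEKUA5, the attach step is exactly what has just failed, since only the first key was ever registered for host1; the JSON-RPC error that follows is the client-side result of that failed attach.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) on core 2 and wait until its RPC socket answers.
$spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
waitforlisten $! "$sock"

# Attach the NVMe-oF controller through bdevperf's RPC socket, presenting the PSK.
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MXx2bEKUA5

# Only reached when the attach succeeds: run the actual I/O phase.
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests
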
00:17:32.062 request: 00:17:32.062 { 00:17:32.062 "name": "TLSTEST", 00:17:32.062 "trtype": "tcp", 00:17:32.062 "traddr": "10.0.0.2", 00:17:32.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.062 "adrfam": "ipv4", 00:17:32.062 "trsvcid": "4420", 00:17:32.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.062 "psk": "/tmp/tmp.MXx2bEKUA5", 00:17:32.062 "method": "bdev_nvme_attach_controller", 00:17:32.062 "req_id": 1 00:17:32.062 } 00:17:32.062 Got JSON-RPC error response 00:17:32.062 response: 00:17:32.062 { 00:17:32.062 "code": -32602, 00:17:32.062 "message": "Invalid parameters" 00:17:32.062 } 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3593901 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3593901 ']' 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3593901 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3593901 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3593901' 00:17:32.062 killing process with pid 3593901 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3593901 00:17:32.062 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.062 00:17:32.062 Latency(us) 00:17:32.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.062 =================================================================================================================== 00:17:32.062 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.062 [2024-05-14 23:59:31.887664] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:32.062 23:59:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3593901 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8knEUWcR42 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8knEUWcR42 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8knEUWcR42 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8knEUWcR42' 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3594035 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3594035 /var/tmp/bdevperf.sock 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3594035 ']' 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:32.062 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.062 [2024-05-14 23:59:32.128995] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:17:32.062 [2024-05-14 23:59:32.129049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594035 ] 00:17:32.062 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.062 [2024-05-14 23:59:32.197034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.062 [2024-05-14 23:59:32.265439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.630 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:32.630 23:59:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:32.630 23:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.8knEUWcR42 00:17:32.630 [2024-05-14 23:59:33.075997] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.630 [2024-05-14 23:59:33.076073] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:32.630 [2024-05-14 23:59:33.083612] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:32.630 [2024-05-14 23:59:33.083638] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:32.630 [2024-05-14 23:59:33.083665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:32.630 [2024-05-14 23:59:33.084434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c8610 (107): Transport endpoint is not connected 00:17:32.630 [2024-05-14 23:59:33.085426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c8610 (9): Bad file descriptor 00:17:32.630 [2024-05-14 23:59:33.086427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:32.630 [2024-05-14 23:59:33.086439] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:32.630 [2024-05-14 23:59:33.086450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
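
This failure differs from the wrong-key case above: the handshake gets far enough for the target to look up a PSK, and the lookup key it prints is the TLS PSK identity, the literal tag NVMe0R01 followed by the host NQN and the subsystem NQN. Nothing was registered under host2, so the lookup fails and the connection is dropped; the request/response dump that follows is the matching RPC error. A one-line illustration of how that identity is assembled, treating the NVMe0R01 tag as an opaque constant taken from the log:

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
# Reproduces the identity string printed by tcp_sock_get_key in the error above.
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
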
00:17:32.630 request: 00:17:32.630 { 00:17:32.630 "name": "TLSTEST", 00:17:32.630 "trtype": "tcp", 00:17:32.630 "traddr": "10.0.0.2", 00:17:32.630 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:32.630 "adrfam": "ipv4", 00:17:32.630 "trsvcid": "4420", 00:17:32.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.630 "psk": "/tmp/tmp.8knEUWcR42", 00:17:32.630 "method": "bdev_nvme_attach_controller", 00:17:32.630 "req_id": 1 00:17:32.630 } 00:17:32.630 Got JSON-RPC error response 00:17:32.630 response: 00:17:32.630 { 00:17:32.630 "code": -32602, 00:17:32.630 "message": "Invalid parameters" 00:17:32.630 } 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3594035 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3594035 ']' 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3594035 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3594035 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3594035' 00:17:32.630 killing process with pid 3594035 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3594035 00:17:32.630 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.630 00:17:32.630 Latency(us) 00:17:32.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.630 =================================================================================================================== 00:17:32.630 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.630 [2024-05-14 23:59:33.161104] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:32.630 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3594035 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8knEUWcR42 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8knEUWcR42 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8knEUWcR42 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8knEUWcR42' 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3594277 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3594277 /var/tmp/bdevperf.sock 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3594277 ']' 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:32.889 23:59:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 [2024-05-14 23:59:33.404271] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:17:32.889 [2024-05-14 23:59:33.404322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594277 ] 00:17:32.889 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.889 [2024-05-14 23:59:33.470018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.147 [2024-05-14 23:59:33.535165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.713 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:33.713 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:33.713 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8knEUWcR42 00:17:33.973 [2024-05-14 23:59:34.336904] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:33.973 [2024-05-14 23:59:34.336980] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:33.973 [2024-05-14 23:59:34.346795] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:33.973 [2024-05-14 23:59:34.346819] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:33.973 [2024-05-14 23:59:34.346847] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:33.973 [2024-05-14 23:59:34.347343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1678610 (107): Transport endpoint is not connected 00:17:33.973 [2024-05-14 23:59:34.348335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1678610 (9): Bad file descriptor 00:17:33.973 [2024-05-14 23:59:34.349336] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:33.973 [2024-05-14 23:59:34.349349] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:33.973 [2024-05-14 23:59:34.349360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
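
Each of these expected failures is wrapped in NOT, the autotest_common.sh helper whose xtrace keeps appearing around the run_bdevperf calls (local es=0, valid_exec_arg, es=1, (( !es == 0 )) and so on). Reduced to the lines visible in this trace, and leaving out its argument validation, the helper behaves roughly as the sketch below; this is a reading of the traced lines, not a copy of the real function. Its return value is what turns the rejected attach whose RPC error follows into a passing check.

NOT() {
    local es=0
    "$@" || es=$?
    # An exit status above 128 means the wrapped command was killed by a signal;
    # that is not the failure being asked for, so propagate it as a test error.
    if (( es > 128 )); then
        return "$es"
    fi
    # NOT succeeds only when the wrapped command failed.
    (( !es == 0 ))
}
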
00:17:33.973 request: 00:17:33.973 { 00:17:33.973 "name": "TLSTEST", 00:17:33.973 "trtype": "tcp", 00:17:33.973 "traddr": "10.0.0.2", 00:17:33.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:33.973 "adrfam": "ipv4", 00:17:33.973 "trsvcid": "4420", 00:17:33.973 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:33.973 "psk": "/tmp/tmp.8knEUWcR42", 00:17:33.973 "method": "bdev_nvme_attach_controller", 00:17:33.973 "req_id": 1 00:17:33.973 } 00:17:33.973 Got JSON-RPC error response 00:17:33.973 response: 00:17:33.973 { 00:17:33.973 "code": -32602, 00:17:33.973 "message": "Invalid parameters" 00:17:33.973 } 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3594277 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3594277 ']' 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3594277 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3594277 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3594277' 00:17:33.973 killing process with pid 3594277 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3594277 00:17:33.973 Received shutdown signal, test time was about 10.000000 seconds 00:17:33.973 00:17:33.973 Latency(us) 00:17:33.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.973 =================================================================================================================== 00:17:33.973 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:33.973 [2024-05-14 23:59:34.426464] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:33.973 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3594277 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3594545 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3594545 /var/tmp/bdevperf.sock 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3594545 ']' 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:34.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:34.232 23:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.232 [2024-05-14 23:59:34.668280] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:17:34.232 [2024-05-14 23:59:34.668331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594545 ] 00:17:34.232 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.232 [2024-05-14 23:59:34.733658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.232 [2024-05-14 23:59:34.797692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.165 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:35.165 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:35.165 23:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:35.165 [2024-05-14 23:59:35.630050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:35.165 [2024-05-14 23:59:35.631838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c2cc0 (9): Bad file descriptor 00:17:35.165 [2024-05-14 23:59:35.632836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:35.165 [2024-05-14 23:59:35.632850] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:35.165 [2024-05-14 23:59:35.632861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:35.165 request: 00:17:35.165 { 00:17:35.165 "name": "TLSTEST", 00:17:35.165 "trtype": "tcp", 00:17:35.165 "traddr": "10.0.0.2", 00:17:35.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.165 "adrfam": "ipv4", 00:17:35.165 "trsvcid": "4420", 00:17:35.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.165 "method": "bdev_nvme_attach_controller", 00:17:35.165 "req_id": 1 00:17:35.165 } 00:17:35.165 Got JSON-RPC error response 00:17:35.165 response: 00:17:35.165 { 00:17:35.165 "code": -32602, 00:17:35.166 "message": "Invalid parameters" 00:17:35.166 } 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3594545 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3594545 ']' 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3594545 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3594545 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3594545' 00:17:35.166 killing process with pid 3594545 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3594545 00:17:35.166 Received shutdown signal, test time was about 10.000000 seconds 00:17:35.166 00:17:35.166 Latency(us) 00:17:35.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.166 =================================================================================================================== 00:17:35.166 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:35.166 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3594545 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3589409 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3589409 ']' 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3589409 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3589409 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3589409' 00:17:35.424 killing process with pid 3589409 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3589409 
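Both attach attempts in the trace above end with JSON-RPC error -32602 (Invalid parameters) from bdev_nvme_attach_controller: target/tls.sh@34 issues the call through scripts/rpc.py against the bdevperf RPC socket, and the error is returned once controller initialization over the TCP connection fails. For reference, a minimal sketch of issuing the same request by hand over the UNIX-domain RPC socket follows. The field values are copied from the request object printed above; the hand-rolled framing loop is an illustrative assumption only — the supported interface is the scripts/rpc.py wrapper the test itself uses.

# Sketch only: send one bdev_nvme_attach_controller request to the bdevperf RPC
# socket and print the JSON-RPC reply. Parameter values mirror the request above.
import json
import socket

def rpc_call(sock_path, method, params, req_id=1):
    """Send a single JSON-RPC 2.0 request over SPDK's UNIX-domain RPC socket."""
    req = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        data = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
            try:
                # Return as soon as a complete JSON object has arrived; a robust
                # client would parse incrementally the way scripts/rpc does.
                return json.loads(data.decode())
            except ValueError:
                continue
    return None

resp = rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    # Optional PSK file path, as in the earlier request; omit it to reproduce
    # the no-PSK attempt shown above.
    "psk": "/tmp/tmp.8knEUWcR42",
})
print(resp)  # on failure, expect an "error" member with code -32602, "Invalid parameters"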
00:17:35.424 [2024-05-14 23:59:35.958000] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:35.424 [2024-05-14 23:59:35.958037] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:35.424 23:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3589409 00:17:35.682 23:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.UD401VpRKp 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.UD401VpRKp 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3594834 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3594834 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3594834 ']' 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:35.683 23:59:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.940 [2024-05-14 23:59:36.280895] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:17:35.940 [2024-05-14 23:59:36.280945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.940 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.941 [2024-05-14 23:59:36.355678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.941 [2024-05-14 23:59:36.426448] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.941 [2024-05-14 23:59:36.426487] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.941 [2024-05-14 23:59:36.426496] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.941 [2024-05-14 23:59:36.426504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.941 [2024-05-14 23:59:36.426511] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.941 [2024-05-14 23:59:36.426532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.504 23:59:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:36.504 23:59:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:36.504 23:59:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.504 23:59:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.504 23:59:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.761 23:59:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.761 23:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.UD401VpRKp 00:17:36.761 23:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UD401VpRKp 00:17:36.761 23:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:36.761 [2024-05-14 23:59:37.273002] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.761 23:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:37.051 23:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:37.051 [2024-05-14 23:59:37.597819] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:37.051 [2024-05-14 23:59:37.597883] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:37.051 [2024-05-14 23:59:37.598074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.051 23:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:37.308 malloc0 00:17:37.308 23:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
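The key_long value built a few steps back by format_interchange_psk (target/tls.sh@159) follows the NVMe/TCP PSK interchange layout: a NVMeTLSkey-1: prefix, a two-digit digest identifier (02 here, from the 2 passed to the helper), and a Base64 blob wrapping the configured key material plus a 4-byte CRC-32 check value, terminated by a colon. The sketch below reconstructs that layout for illustration only; the CRC-32 variant and its byte order are assumptions to be checked against the python snippet inside nvmf/common.sh, not something this trace proves.

# Sketch, with assumptions flagged: rebuild the interchange-format key the way
# format_interchange_psk appears to, i.e. NVMeTLSkey-1:<digest>:Base64(key || CRC-32):
import base64
import struct
import zlib

def format_interchange_psk(configured_key: bytes, digest: int) -> str:
    # Assumption: zlib-style CRC-32 of the key material, appended little-endian.
    crc = zlib.crc32(configured_key) & 0xFFFFFFFF
    blob = base64.b64encode(configured_key + struct.pack("<I", crc)).decode()
    return f"NVMeTLSkey-1:{digest:02}:{blob}:"

# The trace feeds the ASCII hex string itself in as key material:
key = b"00112233445566778899aabbccddeeff0011223344556677"
print(format_interchange_psk(key, 2))
# Should resemble the key_long above; if the CRC byte-order assumption is wrong,
# only the trailing characters before the final ':' will differ.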
00:17:37.566 23:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UD401VpRKp 00:17:37.566 [2024-05-14 23:59:38.095291] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:37.566 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UD401VpRKp 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UD401VpRKp' 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3595127 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3595127 /var/tmp/bdevperf.sock 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3595127 ']' 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.567 23:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.567 [2024-05-14 23:59:38.153741] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:17:37.567 [2024-05-14 23:59:38.153790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595127 ] 00:17:37.825 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.825 [2024-05-14 23:59:38.218120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.825 [2024-05-14 23:59:38.291332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.391 23:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.391 23:59:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:38.391 23:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UD401VpRKp 00:17:38.649 [2024-05-14 23:59:39.085989] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.649 [2024-05-14 23:59:39.086074] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:38.649 TLSTESTn1 00:17:38.649 23:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:38.907 Running I/O for 10 seconds... 00:17:48.864 00:17:48.864 Latency(us) 00:17:48.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.864 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:48.864 Verification LBA range: start 0x0 length 0x2000 00:17:48.864 TLSTESTn1 : 10.05 2135.19 8.34 0.00 0.00 59803.31 5111.81 98146.71 00:17:48.865 =================================================================================================================== 00:17:48.865 Total : 2135.19 8.34 0.00 0.00 59803.31 5111.81 98146.71 00:17:48.865 0 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3595127 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3595127 ']' 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3595127 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3595127 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3595127' 00:17:48.865 killing process with pid 3595127 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3595127 00:17:48.865 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.865 00:17:48.865 Latency(us) 00:17:48.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:48.865 =================================================================================================================== 00:17:48.865 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.865 [2024-05-14 23:59:49.403371] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:48.865 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3595127 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.UD401VpRKp 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UD401VpRKp 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UD401VpRKp 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UD401VpRKp 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UD401VpRKp' 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3597133 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3597133 /var/tmp/bdevperf.sock 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3597133 ']' 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:49.121 23:59:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.121 [2024-05-14 23:59:49.662050] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:17:49.121 [2024-05-14 23:59:49.662101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597133 ] 00:17:49.121 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.377 [2024-05-14 23:59:49.728368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.377 [2024-05-14 23:59:49.804137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.942 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:49.942 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:49.942 23:59:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UD401VpRKp 00:17:50.200 [2024-05-14 23:59:50.635050] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.200 [2024-05-14 23:59:50.635101] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:50.200 [2024-05-14 23:59:50.635110] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.UD401VpRKp 00:17:50.200 request: 00:17:50.200 { 00:17:50.200 "name": "TLSTEST", 00:17:50.200 "trtype": "tcp", 00:17:50.200 "traddr": "10.0.0.2", 00:17:50.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.200 "adrfam": "ipv4", 00:17:50.200 "trsvcid": "4420", 00:17:50.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.200 "psk": "/tmp/tmp.UD401VpRKp", 00:17:50.200 "method": "bdev_nvme_attach_controller", 00:17:50.200 "req_id": 1 00:17:50.200 } 00:17:50.200 Got JSON-RPC error response 00:17:50.200 response: 00:17:50.200 { 00:17:50.200 "code": -1, 00:17:50.200 "message": "Operation not permitted" 00:17:50.200 } 00:17:50.200 23:59:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3597133 00:17:50.200 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3597133 ']' 00:17:50.200 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3597133 00:17:50.200 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:50.200 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:50.200 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3597133 00:17:50.201 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:50.201 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:50.201 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3597133' 00:17:50.201 killing process with pid 3597133 00:17:50.201 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3597133 00:17:50.201 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.201 00:17:50.201 Latency(us) 00:17:50.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.201 =================================================================================================================== 00:17:50.201 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:50.201 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3597133 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3594834 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3594834 ']' 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3594834 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3594834 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3594834' 00:17:50.459 killing process with pid 3594834 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3594834 00:17:50.459 [2024-05-14 23:59:50.958523] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:50.459 [2024-05-14 23:59:50.958563] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:50.459 23:59:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3594834 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3597410 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3597410 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3597410 ']' 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
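The run above (target/tls.sh@170-171) deliberately loosens the key file to mode 0666, and bdev_nvme_attach_controller then refuses it with "Incorrect permissions for PSK file" / "Could not load PSK", surfacing as JSON-RPC error -1 (Operation not permitted). A small pre-flight check of the kind a caller could perform before handing a PSK path to the RPC is sketched below; the exact policy SPDK enforces is inferred here only from the 0600-versus-0666 contrast in this test, so treat the "no group/other permission bits" rule as an assumption.

# Sketch: confirm a PSK file is private to its owner before passing its path to
# bdev_nvme_attach_controller or nvmf_subsystem_add_host. Assumption (from the
# 0600 vs 0666 behaviour above): any group/other permission bits get the file rejected.
import os
import stat

def psk_file_is_private(path: str) -> bool:
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

path = "/tmp/tmp.UD401VpRKp"
try:
    usable = psk_file_is_private(path)
except FileNotFoundError:
    usable = False
print(path, "looks usable as a PSK file" if usable else "would be rejected (permissions or missing)")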
00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:50.718 23:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.718 [2024-05-14 23:59:51.212906] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:50.718 [2024-05-14 23:59:51.212959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.718 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.718 [2024-05-14 23:59:51.285501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.976 [2024-05-14 23:59:51.354162] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.976 [2024-05-14 23:59:51.354208] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.976 [2024-05-14 23:59:51.354233] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.976 [2024-05-14 23:59:51.354242] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.976 [2024-05-14 23:59:51.354249] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.976 [2024-05-14 23:59:51.354271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.UD401VpRKp 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UD401VpRKp 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.UD401VpRKp 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UD401VpRKp 00:17:51.542 23:59:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:51.799 [2024-05-14 23:59:52.221129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.799 23:59:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:52.056 23:59:52 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:52.056 [2024-05-14 23:59:52.541921] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:52.056 [2024-05-14 23:59:52.541988] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:52.057 [2024-05-14 23:59:52.542186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.057 23:59:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:52.315 malloc0 00:17:52.315 23:59:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:52.315 23:59:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UD401VpRKp 00:17:52.573 [2024-05-14 23:59:53.051489] tcp.c:3572:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:52.573 [2024-05-14 23:59:53.051515] tcp.c:3658:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:52.573 [2024-05-14 23:59:53.051539] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:52.573 request: 00:17:52.573 { 00:17:52.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.573 "host": "nqn.2016-06.io.spdk:host1", 00:17:52.573 "psk": "/tmp/tmp.UD401VpRKp", 00:17:52.573 "method": "nvmf_subsystem_add_host", 00:17:52.573 "req_id": 1 00:17:52.573 } 00:17:52.573 Got JSON-RPC error response 00:17:52.573 response: 00:17:52.573 { 00:17:52.573 "code": -32603, 00:17:52.573 "message": "Internal error" 00:17:52.573 } 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3597410 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3597410 ']' 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3597410 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3597410 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3597410' 00:17:52.573 killing process with pid 3597410 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3597410 00:17:52.573 [2024-05-14 23:59:53.125308] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:52.573 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3597410 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.UD401VpRKp 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3597829 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3597829 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3597829 ']' 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:52.831 23:59:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.831 [2024-05-14 23:59:53.395135] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:52.831 [2024-05-14 23:59:53.395186] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.088 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.088 [2024-05-14 23:59:53.469232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.088 [2024-05-14 23:59:53.541574] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.088 [2024-05-14 23:59:53.541612] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.088 [2024-05-14 23:59:53.541622] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.088 [2024-05-14 23:59:53.541630] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.088 [2024-05-14 23:59:53.541638] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:53.088 [2024-05-14 23:59:53.541664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.653 23:59:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:53.653 23:59:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:53.653 23:59:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.653 23:59:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.653 23:59:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.653 23:59:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.653 23:59:54 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.UD401VpRKp 00:17:53.653 23:59:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UD401VpRKp 00:17:53.653 23:59:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:53.911 [2024-05-14 23:59:54.396403] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.911 23:59:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:54.168 23:59:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:54.168 [2024-05-14 23:59:54.721208] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:54.168 [2024-05-14 23:59:54.721278] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.168 [2024-05-14 23:59:54.721467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.168 23:59:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:54.425 malloc0 00:17:54.425 23:59:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:54.682 23:59:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UD401VpRKp 00:17:54.682 [2024-05-14 23:59:55.222910] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:54.682 23:59:55 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3598124 00:17:54.682 23:59:55 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:54.683 23:59:55 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.683 23:59:55 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3598124 /var/tmp/bdevperf.sock 00:17:54.683 23:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3598124 ']' 00:17:54.683 23:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:17:54.683 23:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:54.683 23:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.683 23:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:54.683 23:59:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.940 [2024-05-14 23:59:55.280632] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:54.940 [2024-05-14 23:59:55.280679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598124 ] 00:17:54.940 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.940 [2024-05-14 23:59:55.344866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.940 [2024-05-14 23:59:55.413622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.505 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:55.505 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:55.505 23:59:56 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UD401VpRKp 00:17:55.763 [2024-05-14 23:59:56.228301] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.763 [2024-05-14 23:59:56.228388] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:55.763 TLSTESTn1 00:17:55.763 23:59:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:56.020 23:59:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:56.020 "subsystems": [ 00:17:56.020 { 00:17:56.020 "subsystem": "keyring", 00:17:56.020 "config": [] 00:17:56.020 }, 00:17:56.020 { 00:17:56.020 "subsystem": "iobuf", 00:17:56.020 "config": [ 00:17:56.020 { 00:17:56.020 "method": "iobuf_set_options", 00:17:56.020 "params": { 00:17:56.020 "small_pool_count": 8192, 00:17:56.020 "large_pool_count": 1024, 00:17:56.020 "small_bufsize": 8192, 00:17:56.020 "large_bufsize": 135168 00:17:56.020 } 00:17:56.020 } 00:17:56.020 ] 00:17:56.020 }, 00:17:56.020 { 00:17:56.020 "subsystem": "sock", 00:17:56.020 "config": [ 00:17:56.020 { 00:17:56.020 "method": "sock_impl_set_options", 00:17:56.020 "params": { 00:17:56.020 "impl_name": "posix", 00:17:56.020 "recv_buf_size": 2097152, 00:17:56.020 "send_buf_size": 2097152, 00:17:56.020 "enable_recv_pipe": true, 00:17:56.020 "enable_quickack": false, 00:17:56.020 "enable_placement_id": 0, 00:17:56.020 "enable_zerocopy_send_server": true, 00:17:56.020 "enable_zerocopy_send_client": false, 00:17:56.020 "zerocopy_threshold": 0, 00:17:56.020 "tls_version": 0, 00:17:56.020 "enable_ktls": false 00:17:56.020 } 00:17:56.020 }, 00:17:56.020 { 00:17:56.020 "method": "sock_impl_set_options", 00:17:56.020 "params": { 00:17:56.020 
"impl_name": "ssl", 00:17:56.020 "recv_buf_size": 4096, 00:17:56.020 "send_buf_size": 4096, 00:17:56.020 "enable_recv_pipe": true, 00:17:56.020 "enable_quickack": false, 00:17:56.020 "enable_placement_id": 0, 00:17:56.020 "enable_zerocopy_send_server": true, 00:17:56.021 "enable_zerocopy_send_client": false, 00:17:56.021 "zerocopy_threshold": 0, 00:17:56.021 "tls_version": 0, 00:17:56.021 "enable_ktls": false 00:17:56.021 } 00:17:56.021 } 00:17:56.021 ] 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "subsystem": "vmd", 00:17:56.021 "config": [] 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "subsystem": "accel", 00:17:56.021 "config": [ 00:17:56.021 { 00:17:56.021 "method": "accel_set_options", 00:17:56.021 "params": { 00:17:56.021 "small_cache_size": 128, 00:17:56.021 "large_cache_size": 16, 00:17:56.021 "task_count": 2048, 00:17:56.021 "sequence_count": 2048, 00:17:56.021 "buf_count": 2048 00:17:56.021 } 00:17:56.021 } 00:17:56.021 ] 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "subsystem": "bdev", 00:17:56.021 "config": [ 00:17:56.021 { 00:17:56.021 "method": "bdev_set_options", 00:17:56.021 "params": { 00:17:56.021 "bdev_io_pool_size": 65535, 00:17:56.021 "bdev_io_cache_size": 256, 00:17:56.021 "bdev_auto_examine": true, 00:17:56.021 "iobuf_small_cache_size": 128, 00:17:56.021 "iobuf_large_cache_size": 16 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "bdev_raid_set_options", 00:17:56.021 "params": { 00:17:56.021 "process_window_size_kb": 1024 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "bdev_iscsi_set_options", 00:17:56.021 "params": { 00:17:56.021 "timeout_sec": 30 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "bdev_nvme_set_options", 00:17:56.021 "params": { 00:17:56.021 "action_on_timeout": "none", 00:17:56.021 "timeout_us": 0, 00:17:56.021 "timeout_admin_us": 0, 00:17:56.021 "keep_alive_timeout_ms": 10000, 00:17:56.021 "arbitration_burst": 0, 00:17:56.021 "low_priority_weight": 0, 00:17:56.021 "medium_priority_weight": 0, 00:17:56.021 "high_priority_weight": 0, 00:17:56.021 "nvme_adminq_poll_period_us": 10000, 00:17:56.021 "nvme_ioq_poll_period_us": 0, 00:17:56.021 "io_queue_requests": 0, 00:17:56.021 "delay_cmd_submit": true, 00:17:56.021 "transport_retry_count": 4, 00:17:56.021 "bdev_retry_count": 3, 00:17:56.021 "transport_ack_timeout": 0, 00:17:56.021 "ctrlr_loss_timeout_sec": 0, 00:17:56.021 "reconnect_delay_sec": 0, 00:17:56.021 "fast_io_fail_timeout_sec": 0, 00:17:56.021 "disable_auto_failback": false, 00:17:56.021 "generate_uuids": false, 00:17:56.021 "transport_tos": 0, 00:17:56.021 "nvme_error_stat": false, 00:17:56.021 "rdma_srq_size": 0, 00:17:56.021 "io_path_stat": false, 00:17:56.021 "allow_accel_sequence": false, 00:17:56.021 "rdma_max_cq_size": 0, 00:17:56.021 "rdma_cm_event_timeout_ms": 0, 00:17:56.021 "dhchap_digests": [ 00:17:56.021 "sha256", 00:17:56.021 "sha384", 00:17:56.021 "sha512" 00:17:56.021 ], 00:17:56.021 "dhchap_dhgroups": [ 00:17:56.021 "null", 00:17:56.021 "ffdhe2048", 00:17:56.021 "ffdhe3072", 00:17:56.021 "ffdhe4096", 00:17:56.021 "ffdhe6144", 00:17:56.021 "ffdhe8192" 00:17:56.021 ] 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "bdev_nvme_set_hotplug", 00:17:56.021 "params": { 00:17:56.021 "period_us": 100000, 00:17:56.021 "enable": false 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "bdev_malloc_create", 00:17:56.021 "params": { 00:17:56.021 "name": "malloc0", 00:17:56.021 "num_blocks": 8192, 00:17:56.021 "block_size": 4096, 00:17:56.021 
"physical_block_size": 4096, 00:17:56.021 "uuid": "a3c6064c-9454-46d9-bd8b-c2007a70dfc2", 00:17:56.021 "optimal_io_boundary": 0 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "bdev_wait_for_examine" 00:17:56.021 } 00:17:56.021 ] 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "subsystem": "nbd", 00:17:56.021 "config": [] 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "subsystem": "scheduler", 00:17:56.021 "config": [ 00:17:56.021 { 00:17:56.021 "method": "framework_set_scheduler", 00:17:56.021 "params": { 00:17:56.021 "name": "static" 00:17:56.021 } 00:17:56.021 } 00:17:56.021 ] 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "subsystem": "nvmf", 00:17:56.021 "config": [ 00:17:56.021 { 00:17:56.021 "method": "nvmf_set_config", 00:17:56.021 "params": { 00:17:56.021 "discovery_filter": "match_any", 00:17:56.021 "admin_cmd_passthru": { 00:17:56.021 "identify_ctrlr": false 00:17:56.021 } 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "nvmf_set_max_subsystems", 00:17:56.021 "params": { 00:17:56.021 "max_subsystems": 1024 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "nvmf_set_crdt", 00:17:56.021 "params": { 00:17:56.021 "crdt1": 0, 00:17:56.021 "crdt2": 0, 00:17:56.021 "crdt3": 0 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "nvmf_create_transport", 00:17:56.021 "params": { 00:17:56.021 "trtype": "TCP", 00:17:56.021 "max_queue_depth": 128, 00:17:56.021 "max_io_qpairs_per_ctrlr": 127, 00:17:56.021 "in_capsule_data_size": 4096, 00:17:56.021 "max_io_size": 131072, 00:17:56.021 "io_unit_size": 131072, 00:17:56.021 "max_aq_depth": 128, 00:17:56.021 "num_shared_buffers": 511, 00:17:56.021 "buf_cache_size": 4294967295, 00:17:56.021 "dif_insert_or_strip": false, 00:17:56.021 "zcopy": false, 00:17:56.021 "c2h_success": false, 00:17:56.021 "sock_priority": 0, 00:17:56.021 "abort_timeout_sec": 1, 00:17:56.021 "ack_timeout": 0, 00:17:56.021 "data_wr_pool_size": 0 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "nvmf_create_subsystem", 00:17:56.021 "params": { 00:17:56.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.021 "allow_any_host": false, 00:17:56.021 "serial_number": "SPDK00000000000001", 00:17:56.021 "model_number": "SPDK bdev Controller", 00:17:56.021 "max_namespaces": 10, 00:17:56.021 "min_cntlid": 1, 00:17:56.021 "max_cntlid": 65519, 00:17:56.021 "ana_reporting": false 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "nvmf_subsystem_add_host", 00:17:56.021 "params": { 00:17:56.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.021 "host": "nqn.2016-06.io.spdk:host1", 00:17:56.021 "psk": "/tmp/tmp.UD401VpRKp" 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "nvmf_subsystem_add_ns", 00:17:56.021 "params": { 00:17:56.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.021 "namespace": { 00:17:56.021 "nsid": 1, 00:17:56.021 "bdev_name": "malloc0", 00:17:56.021 "nguid": "A3C6064C945446D9BD8BC2007A70DFC2", 00:17:56.021 "uuid": "a3c6064c-9454-46d9-bd8b-c2007a70dfc2", 00:17:56.021 "no_auto_visible": false 00:17:56.021 } 00:17:56.021 } 00:17:56.021 }, 00:17:56.021 { 00:17:56.021 "method": "nvmf_subsystem_add_listener", 00:17:56.021 "params": { 00:17:56.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.021 "listen_address": { 00:17:56.021 "trtype": "TCP", 00:17:56.021 "adrfam": "IPv4", 00:17:56.021 "traddr": "10.0.0.2", 00:17:56.021 "trsvcid": "4420" 00:17:56.021 }, 00:17:56.021 "secure_channel": true 00:17:56.021 } 00:17:56.021 } 00:17:56.021 ] 00:17:56.021 } 
00:17:56.021 ] 00:17:56.021 }' 00:17:56.021 23:59:56 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:56.279 23:59:56 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:56.279 "subsystems": [ 00:17:56.279 { 00:17:56.279 "subsystem": "keyring", 00:17:56.279 "config": [] 00:17:56.279 }, 00:17:56.279 { 00:17:56.279 "subsystem": "iobuf", 00:17:56.279 "config": [ 00:17:56.279 { 00:17:56.279 "method": "iobuf_set_options", 00:17:56.279 "params": { 00:17:56.279 "small_pool_count": 8192, 00:17:56.279 "large_pool_count": 1024, 00:17:56.279 "small_bufsize": 8192, 00:17:56.279 "large_bufsize": 135168 00:17:56.279 } 00:17:56.279 } 00:17:56.279 ] 00:17:56.279 }, 00:17:56.279 { 00:17:56.279 "subsystem": "sock", 00:17:56.279 "config": [ 00:17:56.279 { 00:17:56.279 "method": "sock_impl_set_options", 00:17:56.279 "params": { 00:17:56.279 "impl_name": "posix", 00:17:56.279 "recv_buf_size": 2097152, 00:17:56.279 "send_buf_size": 2097152, 00:17:56.279 "enable_recv_pipe": true, 00:17:56.279 "enable_quickack": false, 00:17:56.279 "enable_placement_id": 0, 00:17:56.279 "enable_zerocopy_send_server": true, 00:17:56.279 "enable_zerocopy_send_client": false, 00:17:56.279 "zerocopy_threshold": 0, 00:17:56.279 "tls_version": 0, 00:17:56.279 "enable_ktls": false 00:17:56.279 } 00:17:56.279 }, 00:17:56.279 { 00:17:56.279 "method": "sock_impl_set_options", 00:17:56.279 "params": { 00:17:56.279 "impl_name": "ssl", 00:17:56.279 "recv_buf_size": 4096, 00:17:56.279 "send_buf_size": 4096, 00:17:56.279 "enable_recv_pipe": true, 00:17:56.279 "enable_quickack": false, 00:17:56.279 "enable_placement_id": 0, 00:17:56.279 "enable_zerocopy_send_server": true, 00:17:56.279 "enable_zerocopy_send_client": false, 00:17:56.279 "zerocopy_threshold": 0, 00:17:56.279 "tls_version": 0, 00:17:56.279 "enable_ktls": false 00:17:56.279 } 00:17:56.279 } 00:17:56.279 ] 00:17:56.279 }, 00:17:56.279 { 00:17:56.279 "subsystem": "vmd", 00:17:56.279 "config": [] 00:17:56.279 }, 00:17:56.279 { 00:17:56.279 "subsystem": "accel", 00:17:56.279 "config": [ 00:17:56.279 { 00:17:56.279 "method": "accel_set_options", 00:17:56.279 "params": { 00:17:56.279 "small_cache_size": 128, 00:17:56.279 "large_cache_size": 16, 00:17:56.279 "task_count": 2048, 00:17:56.279 "sequence_count": 2048, 00:17:56.279 "buf_count": 2048 00:17:56.279 } 00:17:56.279 } 00:17:56.279 ] 00:17:56.279 }, 00:17:56.279 { 00:17:56.279 "subsystem": "bdev", 00:17:56.279 "config": [ 00:17:56.279 { 00:17:56.279 "method": "bdev_set_options", 00:17:56.279 "params": { 00:17:56.279 "bdev_io_pool_size": 65535, 00:17:56.279 "bdev_io_cache_size": 256, 00:17:56.279 "bdev_auto_examine": true, 00:17:56.279 "iobuf_small_cache_size": 128, 00:17:56.279 "iobuf_large_cache_size": 16 00:17:56.279 } 00:17:56.279 }, 00:17:56.279 { 00:17:56.279 "method": "bdev_raid_set_options", 00:17:56.279 "params": { 00:17:56.279 "process_window_size_kb": 1024 00:17:56.279 } 00:17:56.280 }, 00:17:56.280 { 00:17:56.280 "method": "bdev_iscsi_set_options", 00:17:56.280 "params": { 00:17:56.280 "timeout_sec": 30 00:17:56.280 } 00:17:56.280 }, 00:17:56.280 { 00:17:56.280 "method": "bdev_nvme_set_options", 00:17:56.280 "params": { 00:17:56.280 "action_on_timeout": "none", 00:17:56.280 "timeout_us": 0, 00:17:56.280 "timeout_admin_us": 0, 00:17:56.280 "keep_alive_timeout_ms": 10000, 00:17:56.280 "arbitration_burst": 0, 00:17:56.280 "low_priority_weight": 0, 00:17:56.280 "medium_priority_weight": 0, 00:17:56.280 
"high_priority_weight": 0, 00:17:56.280 "nvme_adminq_poll_period_us": 10000, 00:17:56.280 "nvme_ioq_poll_period_us": 0, 00:17:56.280 "io_queue_requests": 512, 00:17:56.280 "delay_cmd_submit": true, 00:17:56.280 "transport_retry_count": 4, 00:17:56.280 "bdev_retry_count": 3, 00:17:56.280 "transport_ack_timeout": 0, 00:17:56.280 "ctrlr_loss_timeout_sec": 0, 00:17:56.280 "reconnect_delay_sec": 0, 00:17:56.280 "fast_io_fail_timeout_sec": 0, 00:17:56.280 "disable_auto_failback": false, 00:17:56.280 "generate_uuids": false, 00:17:56.280 "transport_tos": 0, 00:17:56.280 "nvme_error_stat": false, 00:17:56.280 "rdma_srq_size": 0, 00:17:56.280 "io_path_stat": false, 00:17:56.280 "allow_accel_sequence": false, 00:17:56.280 "rdma_max_cq_size": 0, 00:17:56.280 "rdma_cm_event_timeout_ms": 0, 00:17:56.280 "dhchap_digests": [ 00:17:56.280 "sha256", 00:17:56.280 "sha384", 00:17:56.280 "sha512" 00:17:56.280 ], 00:17:56.280 "dhchap_dhgroups": [ 00:17:56.280 "null", 00:17:56.280 "ffdhe2048", 00:17:56.280 "ffdhe3072", 00:17:56.280 "ffdhe4096", 00:17:56.280 "ffdhe6144", 00:17:56.280 "ffdhe8192" 00:17:56.280 ] 00:17:56.280 } 00:17:56.280 }, 00:17:56.280 { 00:17:56.280 "method": "bdev_nvme_attach_controller", 00:17:56.280 "params": { 00:17:56.280 "name": "TLSTEST", 00:17:56.280 "trtype": "TCP", 00:17:56.280 "adrfam": "IPv4", 00:17:56.280 "traddr": "10.0.0.2", 00:17:56.280 "trsvcid": "4420", 00:17:56.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.280 "prchk_reftag": false, 00:17:56.280 "prchk_guard": false, 00:17:56.280 "ctrlr_loss_timeout_sec": 0, 00:17:56.280 "reconnect_delay_sec": 0, 00:17:56.280 "fast_io_fail_timeout_sec": 0, 00:17:56.280 "psk": "/tmp/tmp.UD401VpRKp", 00:17:56.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.280 "hdgst": false, 00:17:56.280 "ddgst": false 00:17:56.280 } 00:17:56.280 }, 00:17:56.280 { 00:17:56.280 "method": "bdev_nvme_set_hotplug", 00:17:56.280 "params": { 00:17:56.280 "period_us": 100000, 00:17:56.280 "enable": false 00:17:56.280 } 00:17:56.280 }, 00:17:56.280 { 00:17:56.280 "method": "bdev_wait_for_examine" 00:17:56.280 } 00:17:56.280 ] 00:17:56.280 }, 00:17:56.280 { 00:17:56.280 "subsystem": "nbd", 00:17:56.280 "config": [] 00:17:56.280 } 00:17:56.280 ] 00:17:56.280 }' 00:17:56.280 23:59:56 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3598124 00:17:56.280 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3598124 ']' 00:17:56.280 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3598124 00:17:56.280 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:56.280 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.280 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3598124 00:17:56.537 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:56.537 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:56.537 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3598124' 00:17:56.537 killing process with pid 3598124 00:17:56.537 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3598124 00:17:56.537 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.537 00:17:56.537 Latency(us) 00:17:56.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.537 
=================================================================================================================== 00:17:56.537 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.537 [2024-05-14 23:59:56.875249] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:56.537 23:59:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3598124 00:17:56.537 23:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3597829 00:17:56.537 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3597829 ']' 00:17:56.537 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3597829 00:17:56.537 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:56.537 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.537 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3597829 00:17:56.796 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:56.796 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:56.796 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3597829' 00:17:56.796 killing process with pid 3597829 00:17:56.796 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3597829 00:17:56.796 [2024-05-14 23:59:57.133106] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:56.796 [2024-05-14 23:59:57.133142] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:56.796 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3597829 00:17:56.796 23:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:56.796 23:59:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:56.796 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:56.796 23:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:56.796 "subsystems": [ 00:17:56.796 { 00:17:56.796 "subsystem": "keyring", 00:17:56.796 "config": [] 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "subsystem": "iobuf", 00:17:56.796 "config": [ 00:17:56.796 { 00:17:56.796 "method": "iobuf_set_options", 00:17:56.796 "params": { 00:17:56.796 "small_pool_count": 8192, 00:17:56.796 "large_pool_count": 1024, 00:17:56.796 "small_bufsize": 8192, 00:17:56.796 "large_bufsize": 135168 00:17:56.796 } 00:17:56.796 } 00:17:56.796 ] 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "subsystem": "sock", 00:17:56.796 "config": [ 00:17:56.796 { 00:17:56.796 "method": "sock_impl_set_options", 00:17:56.796 "params": { 00:17:56.796 "impl_name": "posix", 00:17:56.796 "recv_buf_size": 2097152, 00:17:56.796 "send_buf_size": 2097152, 00:17:56.796 "enable_recv_pipe": true, 00:17:56.796 "enable_quickack": false, 00:17:56.796 "enable_placement_id": 0, 00:17:56.796 "enable_zerocopy_send_server": true, 00:17:56.796 "enable_zerocopy_send_client": false, 00:17:56.796 "zerocopy_threshold": 0, 00:17:56.796 "tls_version": 0, 00:17:56.796 "enable_ktls": false 00:17:56.796 } 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "method": "sock_impl_set_options", 00:17:56.796 
"params": { 00:17:56.796 "impl_name": "ssl", 00:17:56.796 "recv_buf_size": 4096, 00:17:56.796 "send_buf_size": 4096, 00:17:56.796 "enable_recv_pipe": true, 00:17:56.796 "enable_quickack": false, 00:17:56.796 "enable_placement_id": 0, 00:17:56.796 "enable_zerocopy_send_server": true, 00:17:56.796 "enable_zerocopy_send_client": false, 00:17:56.796 "zerocopy_threshold": 0, 00:17:56.796 "tls_version": 0, 00:17:56.796 "enable_ktls": false 00:17:56.796 } 00:17:56.796 } 00:17:56.796 ] 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "subsystem": "vmd", 00:17:56.796 "config": [] 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "subsystem": "accel", 00:17:56.796 "config": [ 00:17:56.796 { 00:17:56.796 "method": "accel_set_options", 00:17:56.796 "params": { 00:17:56.796 "small_cache_size": 128, 00:17:56.796 "large_cache_size": 16, 00:17:56.796 "task_count": 2048, 00:17:56.796 "sequence_count": 2048, 00:17:56.796 "buf_count": 2048 00:17:56.796 } 00:17:56.796 } 00:17:56.796 ] 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "subsystem": "bdev", 00:17:56.796 "config": [ 00:17:56.796 { 00:17:56.796 "method": "bdev_set_options", 00:17:56.796 "params": { 00:17:56.796 "bdev_io_pool_size": 65535, 00:17:56.796 "bdev_io_cache_size": 256, 00:17:56.796 "bdev_auto_examine": true, 00:17:56.796 "iobuf_small_cache_size": 128, 00:17:56.796 "iobuf_large_cache_size": 16 00:17:56.796 } 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "method": "bdev_raid_set_options", 00:17:56.796 "params": { 00:17:56.796 "process_window_size_kb": 1024 00:17:56.796 } 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "method": "bdev_iscsi_set_options", 00:17:56.796 "params": { 00:17:56.796 "timeout_sec": 30 00:17:56.796 } 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "method": "bdev_nvme_set_options", 00:17:56.796 "params": { 00:17:56.796 "action_on_timeout": "none", 00:17:56.796 "timeout_us": 0, 00:17:56.796 "timeout_admin_us": 0, 00:17:56.796 "keep_alive_timeout_ms": 10000, 00:17:56.796 "arbitration_burst": 0, 00:17:56.796 "low_priority_weight": 0, 00:17:56.796 "medium_priority_weight": 0, 00:17:56.796 "high_priority_weight": 0, 00:17:56.796 "nvme_adminq_poll_period_us": 10000, 00:17:56.796 "nvme_ioq_poll_period_us": 0, 00:17:56.796 "io_queue_requests": 0, 00:17:56.796 "delay_cmd_submit": true, 00:17:56.796 "transport_retry_count": 4, 00:17:56.796 "bdev_retry_count": 3, 00:17:56.796 "transport_ack_timeout": 0, 00:17:56.796 "ctrlr_loss_timeout_sec": 0, 00:17:56.796 "reconnect_delay_sec": 0, 00:17:56.796 "fast_io_fail_timeout_sec": 0, 00:17:56.796 "disable_auto_failback": false, 00:17:56.796 "generate_uuids": false, 00:17:56.796 "transport_tos": 0, 00:17:56.796 "nvme_error_stat": false, 00:17:56.796 "rdma_srq_size": 0, 00:17:56.796 "io_path_stat": false, 00:17:56.796 "allow_accel_sequence": false, 00:17:56.796 "rdma_max_cq_size": 0, 00:17:56.796 "rdma_cm_event_timeout_ms": 0, 00:17:56.796 "dhchap_digests": [ 00:17:56.796 "sha256", 00:17:56.796 "sha384", 00:17:56.796 "sha512" 00:17:56.796 ], 00:17:56.796 "dhchap_dhgroups": [ 00:17:56.796 "null", 00:17:56.796 "ffdhe2048", 00:17:56.796 "ffdhe3072", 00:17:56.796 "ffdhe4096", 00:17:56.796 "ffdhe6144", 00:17:56.796 "ffdhe8192" 00:17:56.796 ] 00:17:56.796 } 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "method": "bdev_nvme_set_hotplug", 00:17:56.796 "params": { 00:17:56.796 "period_us": 100000, 00:17:56.796 "enable": false 00:17:56.796 } 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "method": "bdev_malloc_create", 00:17:56.796 "params": { 00:17:56.796 "name": "malloc0", 00:17:56.796 "num_blocks": 8192, 00:17:56.796 
"block_size": 4096, 00:17:56.796 "physical_block_size": 4096, 00:17:56.796 "uuid": "a3c6064c-9454-46d9-bd8b-c2007a70dfc2", 00:17:56.796 "optimal_io_boundary": 0 00:17:56.796 } 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "method": "bdev_wait_for_examine" 00:17:56.796 } 00:17:56.796 ] 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "subsystem": "nbd", 00:17:56.796 "config": [] 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "subsystem": "scheduler", 00:17:56.796 "config": [ 00:17:56.796 { 00:17:56.796 "method": "framework_set_scheduler", 00:17:56.796 "params": { 00:17:56.796 "name": "static" 00:17:56.796 } 00:17:56.796 } 00:17:56.796 ] 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "subsystem": "nvmf", 00:17:56.796 "config": [ 00:17:56.796 { 00:17:56.796 "method": "nvmf_set_config", 00:17:56.796 "params": { 00:17:56.796 "discovery_filter": "match_any", 00:17:56.796 "admin_cmd_passthru": { 00:17:56.796 "identify_ctrlr": false 00:17:56.796 } 00:17:56.796 } 00:17:56.796 }, 00:17:56.796 { 00:17:56.796 "method": "nvmf_set_max_subsystems", 00:17:56.796 "params": { 00:17:56.797 "max_subsystems": 1024 00:17:56.797 } 00:17:56.797 }, 00:17:56.797 { 00:17:56.797 "method": "nvmf_set_crdt", 00:17:56.797 "params": { 00:17:56.797 "crdt1": 0, 00:17:56.797 "crdt2": 0, 00:17:56.797 "crdt3": 0 00:17:56.797 } 00:17:56.797 }, 00:17:56.797 { 00:17:56.797 "method": "nvmf_create_transport", 00:17:56.797 "params": { 00:17:56.797 "trtype": "TCP", 00:17:56.797 "max_queue_depth": 128, 00:17:56.797 "max_io_qpairs_per_ctrlr": 127, 00:17:56.797 "in_capsule_data_size": 4096, 00:17:56.797 "max_io_size": 131072, 00:17:56.797 "io_unit_size": 131072, 00:17:56.797 "max_aq_depth": 128, 00:17:56.797 "num_shared_buffers": 511, 00:17:56.797 "buf_cache_size": 4294967295, 00:17:56.797 "dif_insert_or_strip": false, 00:17:56.797 "zcopy": false, 00:17:56.797 "c2h_success": false, 00:17:56.797 "sock_priority": 0, 00:17:56.797 "abort_timeout_sec": 1, 00:17:56.797 "ack_timeout": 0, 00:17:56.797 "data_wr_pool_size": 0 00:17:56.797 } 00:17:56.797 }, 00:17:56.797 { 00:17:56.797 "method": "nvmf_create_subsystem", 00:17:56.797 "params": { 00:17:56.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.797 "allow_any_host": false, 00:17:56.797 "serial_number": "SPDK00000000000001", 00:17:56.797 "model_number": "SPDK bdev Controller", 00:17:56.797 "max_namespaces": 10, 00:17:56.797 "min_cntlid": 1, 00:17:56.797 "max_cntlid": 65519, 00:17:56.797 "ana_reporting": false 00:17:56.797 } 00:17:56.797 }, 00:17:56.797 { 00:17:56.797 "method": "nvmf_subsystem_add_host", 00:17:56.797 "params": { 00:17:56.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.797 "host": "nqn.2016-06.io.spdk:host1", 00:17:56.797 "psk": "/tmp/tmp.UD401VpRKp" 00:17:56.797 } 00:17:56.797 }, 00:17:56.797 { 00:17:56.797 "method": "nvmf_subsystem_add_ns", 00:17:56.797 "params": { 00:17:56.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.797 "namespace": { 00:17:56.797 "nsid": 1, 00:17:56.797 "bdev_name": "malloc0", 00:17:56.797 "nguid": "A3C6064C945446D9BD8BC2007A70DFC2", 00:17:56.797 "uuid": "a3c6064c-9454-46d9-bd8b-c2007a70dfc2", 00:17:56.797 "no_auto_visible": false 00:17:56.797 } 00:17:56.797 } 00:17:56.797 }, 00:17:56.797 { 00:17:56.797 "method": "nvmf_subsystem_add_listener", 00:17:56.797 "params": { 00:17:56.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.797 "listen_address": { 00:17:56.797 "trtype": "TCP", 00:17:56.797 "adrfam": "IPv4", 00:17:56.797 "traddr": "10.0.0.2", 00:17:56.797 "trsvcid": "4420" 00:17:56.797 }, 00:17:56.797 "secure_channel": true 00:17:56.797 } 00:17:56.797 } 
00:17:56.797 ] 00:17:56.797 } 00:17:56.797 ] 00:17:56.797 }' 00:17:56.797 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.797 23:59:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3598480 00:17:56.797 23:59:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:56.797 23:59:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3598480 00:17:56.797 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3598480 ']' 00:17:56.797 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.797 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:56.797 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.797 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:56.797 23:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.055 [2024-05-14 23:59:57.402477] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:17:57.055 [2024-05-14 23:59:57.402527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.055 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.055 [2024-05-14 23:59:57.475877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.055 [2024-05-14 23:59:57.542170] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.055 [2024-05-14 23:59:57.542217] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.055 [2024-05-14 23:59:57.542226] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.055 [2024-05-14 23:59:57.542234] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.055 [2024-05-14 23:59:57.542257] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:57.055 [2024-05-14 23:59:57.542320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.313 [2024-05-14 23:59:57.737343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.313 [2024-05-14 23:59:57.753297] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:57.313 [2024-05-14 23:59:57.769322] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:57.313 [2024-05-14 23:59:57.769384] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:57.313 [2024-05-14 23:59:57.779595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3598698 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3598698 /var/tmp/bdevperf.sock 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3598698 ']' 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
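For reference, the TLS setup this run keeps re-exercising reduces to a short rpc.py sequence. The sketch below is a condensed, editorial replay of commands that appear verbatim later in this trace (target/tls.sh@51-@58 for the target and @255-@260 for the bdevperf initiator); it is not additional captured output. Here "rpc.py" abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the log, and /tmp/tmp.UD401VpRKp is the PSK interchange file this run uses throughout.

# --- target side: TCP transport, subsystem, TLS-capable listener (-k), malloc namespace, PSK for the host ---
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UD401VpRKp

# --- initiator side (bdevperf app): register the PSK in the keyring, attach over TLS, run I/O ---
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UD401VpRKp
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note that the earlier bdevperf attach in this log (target/tls.sh@204/@211) still passes the PSK file path directly ("psk": "/tmp/tmp.UD401VpRKp"), which is what triggers the "spdk_nvme_ctrlr_opts.psk ... scheduled for removal in v24.09" deprecation warnings seen in this section; after the keyring_file_add_key/--psk key0 form is used in the later passes, that particular warning does not recur.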
00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:57.941 23:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:57.941 "subsystems": [ 00:17:57.941 { 00:17:57.941 "subsystem": "keyring", 00:17:57.941 "config": [] 00:17:57.941 }, 00:17:57.941 { 00:17:57.941 "subsystem": "iobuf", 00:17:57.941 "config": [ 00:17:57.941 { 00:17:57.941 "method": "iobuf_set_options", 00:17:57.941 "params": { 00:17:57.941 "small_pool_count": 8192, 00:17:57.941 "large_pool_count": 1024, 00:17:57.941 "small_bufsize": 8192, 00:17:57.941 "large_bufsize": 135168 00:17:57.941 } 00:17:57.941 } 00:17:57.941 ] 00:17:57.941 }, 00:17:57.941 { 00:17:57.941 "subsystem": "sock", 00:17:57.941 "config": [ 00:17:57.941 { 00:17:57.941 "method": "sock_impl_set_options", 00:17:57.941 "params": { 00:17:57.941 "impl_name": "posix", 00:17:57.941 "recv_buf_size": 2097152, 00:17:57.942 "send_buf_size": 2097152, 00:17:57.942 "enable_recv_pipe": true, 00:17:57.942 "enable_quickack": false, 00:17:57.942 "enable_placement_id": 0, 00:17:57.942 "enable_zerocopy_send_server": true, 00:17:57.942 "enable_zerocopy_send_client": false, 00:17:57.942 "zerocopy_threshold": 0, 00:17:57.942 "tls_version": 0, 00:17:57.942 "enable_ktls": false 00:17:57.942 } 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "method": "sock_impl_set_options", 00:17:57.942 "params": { 00:17:57.942 "impl_name": "ssl", 00:17:57.942 "recv_buf_size": 4096, 00:17:57.942 "send_buf_size": 4096, 00:17:57.942 "enable_recv_pipe": true, 00:17:57.942 "enable_quickack": false, 00:17:57.942 "enable_placement_id": 0, 00:17:57.942 "enable_zerocopy_send_server": true, 00:17:57.942 "enable_zerocopy_send_client": false, 00:17:57.942 "zerocopy_threshold": 0, 00:17:57.942 "tls_version": 0, 00:17:57.942 "enable_ktls": false 00:17:57.942 } 00:17:57.942 } 00:17:57.942 ] 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "subsystem": "vmd", 00:17:57.942 "config": [] 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "subsystem": "accel", 00:17:57.942 "config": [ 00:17:57.942 { 00:17:57.942 "method": "accel_set_options", 00:17:57.942 "params": { 00:17:57.942 "small_cache_size": 128, 00:17:57.942 "large_cache_size": 16, 00:17:57.942 "task_count": 2048, 00:17:57.942 "sequence_count": 2048, 00:17:57.942 "buf_count": 2048 00:17:57.942 } 00:17:57.942 } 00:17:57.942 ] 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "subsystem": "bdev", 00:17:57.942 "config": [ 00:17:57.942 { 00:17:57.942 "method": "bdev_set_options", 00:17:57.942 "params": { 00:17:57.942 "bdev_io_pool_size": 65535, 00:17:57.942 "bdev_io_cache_size": 256, 00:17:57.942 "bdev_auto_examine": true, 00:17:57.942 "iobuf_small_cache_size": 128, 00:17:57.942 "iobuf_large_cache_size": 16 00:17:57.942 } 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "method": "bdev_raid_set_options", 00:17:57.942 "params": { 00:17:57.942 "process_window_size_kb": 1024 00:17:57.942 } 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "method": "bdev_iscsi_set_options", 00:17:57.942 "params": { 00:17:57.942 "timeout_sec": 30 00:17:57.942 } 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "method": "bdev_nvme_set_options", 00:17:57.942 "params": { 00:17:57.942 "action_on_timeout": "none", 00:17:57.942 "timeout_us": 0, 00:17:57.942 "timeout_admin_us": 0, 00:17:57.942 "keep_alive_timeout_ms": 10000, 00:17:57.942 "arbitration_burst": 0, 00:17:57.942 "low_priority_weight": 0, 00:17:57.942 "medium_priority_weight": 0, 00:17:57.942 "high_priority_weight": 0, 00:17:57.942 "nvme_adminq_poll_period_us": 10000, 00:17:57.942 "nvme_ioq_poll_period_us": 0, 
00:17:57.942 "io_queue_requests": 512, 00:17:57.942 "delay_cmd_submit": true, 00:17:57.942 "transport_retry_count": 4, 00:17:57.942 "bdev_retry_count": 3, 00:17:57.942 "transport_ack_timeout": 0, 00:17:57.942 "ctrlr_loss_timeout_sec": 0, 00:17:57.942 "reconnect_delay_sec": 0, 00:17:57.942 "fast_io_fail_timeout_sec": 0, 00:17:57.942 "disable_auto_failback": false, 00:17:57.942 "generate_uuids": false, 00:17:57.942 "transport_tos": 0, 00:17:57.942 "nvme_error_stat": false, 00:17:57.942 "rdma_srq_size": 0, 00:17:57.942 "io_path_stat": false, 00:17:57.942 "allow_accel_sequence": false, 00:17:57.942 "rdma_max_cq_size": 0, 00:17:57.942 "rdma_cm_event_timeout_ms": 0, 00:17:57.942 "dhchap_digests": [ 00:17:57.942 "sha256", 00:17:57.942 "sha384", 00:17:57.942 "sha512" 00:17:57.942 ], 00:17:57.942 "dhchap_dhgroups": [ 00:17:57.942 "null", 00:17:57.942 "ffdhe2048", 00:17:57.942 "ffdhe3072", 00:17:57.942 "ffdhe4096", 00:17:57.942 "ffdhe6144", 00:17:57.942 "ffdhe8192" 00:17:57.942 ] 00:17:57.942 } 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "method": "bdev_nvme_attach_controller", 00:17:57.942 "params": { 00:17:57.942 "name": "TLSTEST", 00:17:57.942 "trtype": "TCP", 00:17:57.942 "adrfam": "IPv4", 00:17:57.942 "traddr": "10.0.0.2", 00:17:57.942 "trsvcid": "4420", 00:17:57.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.942 "prchk_reftag": false, 00:17:57.942 "prchk_guard": false, 00:17:57.942 "ctrlr_loss_timeout_sec": 0, 00:17:57.942 "reconnect_delay_sec": 0, 00:17:57.942 "fast_io_fail_timeout_sec": 0, 00:17:57.942 "psk": "/tmp/tmp.UD401VpRKp", 00:17:57.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.942 "hdgst": false, 00:17:57.942 "ddgst": false 00:17:57.942 } 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "method": "bdev_nvme_set_hotplug", 00:17:57.942 "params": { 00:17:57.942 "period_us": 100000, 00:17:57.942 "enable": false 00:17:57.942 } 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "method": "bdev_wait_for_examine" 00:17:57.942 } 00:17:57.942 ] 00:17:57.942 }, 00:17:57.942 { 00:17:57.942 "subsystem": "nbd", 00:17:57.942 "config": [] 00:17:57.942 } 00:17:57.942 ] 00:17:57.942 }' 00:17:57.942 23:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.942 [2024-05-14 23:59:58.278600] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:17:57.942 [2024-05-14 23:59:58.278654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598698 ] 00:17:57.942 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.942 [2024-05-14 23:59:58.361606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.942 [2024-05-14 23:59:58.434608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.200 [2024-05-14 23:59:58.569491] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.200 [2024-05-14 23:59:58.569589] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:58.795 23:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:58.795 23:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:58.795 23:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:58.795 Running I/O for 10 seconds... 00:18:08.760 00:18:08.760 Latency(us) 00:18:08.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.760 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:08.760 Verification LBA range: start 0x0 length 0x2000 00:18:08.760 TLSTESTn1 : 10.05 2109.96 8.24 0.00 0.00 60516.90 6920.60 114085.07 00:18:08.760 =================================================================================================================== 00:18:08.760 Total : 2109.96 8.24 0.00 0.00 60516.90 6920.60 114085.07 00:18:08.760 0 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3598698 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3598698 ']' 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3598698 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3598698 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3598698' 00:18:08.760 killing process with pid 3598698 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3598698 00:18:08.760 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.760 00:18:08.760 Latency(us) 00:18:08.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.760 =================================================================================================================== 00:18:08.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.760 [2024-05-15 00:00:09.300113] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:18:08.760 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3598698 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3598480 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3598480 ']' 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3598480 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3598480 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3598480' 00:18:09.024 killing process with pid 3598480 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3598480 00:18:09.024 [2024-05-15 00:00:09.554082] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:09.024 [2024-05-15 00:00:09.554118] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:09.024 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3598480 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3601134 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3601134 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3601134 ']' 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:09.284 00:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.284 [2024-05-15 00:00:09.819895] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:18:09.284 [2024-05-15 00:00:09.819945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.285 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.548 [2024-05-15 00:00:09.895045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.548 [2024-05-15 00:00:09.966980] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.548 [2024-05-15 00:00:09.967025] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.548 [2024-05-15 00:00:09.967035] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.548 [2024-05-15 00:00:09.967044] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.548 [2024-05-15 00:00:09.967051] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.548 [2024-05-15 00:00:09.967074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.124 00:00:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:10.124 00:00:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:10.124 00:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.124 00:00:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.124 00:00:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.124 00:00:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.124 00:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.UD401VpRKp 00:18:10.125 00:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UD401VpRKp 00:18:10.125 00:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:10.384 [2024-05-15 00:00:10.807306] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.384 00:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:10.650 00:00:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:10.650 [2024-05-15 00:00:11.144145] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:10.650 [2024-05-15 00:00:11.144225] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.650 [2024-05-15 00:00:11.144424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.650 00:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:10.915 malloc0 00:18:10.915 00:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:18:10.915 00:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UD401VpRKp 00:18:11.187 [2024-05-15 00:00:11.637791] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:11.187 00:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3601510 00:18:11.187 00:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:11.187 00:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.187 00:00:11 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3601510 /var/tmp/bdevperf.sock 00:18:11.187 00:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3601510 ']' 00:18:11.187 00:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.187 00:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.187 00:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.187 00:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.187 00:00:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.187 [2024-05-15 00:00:11.703555] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:11.187 [2024-05-15 00:00:11.703606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601510 ] 00:18:11.187 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.187 [2024-05-15 00:00:11.774898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.453 [2024-05-15 00:00:11.849613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.058 00:00:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:12.058 00:00:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:12.058 00:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UD401VpRKp 00:18:12.323 00:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:12.323 [2024-05-15 00:00:12.813572] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.323 nvme0n1 00:18:12.323 00:00:12 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:12.587 Running I/O for 1 seconds... 
00:18:13.539 00:18:13.539 Latency(us) 00:18:13.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.539 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:13.539 Verification LBA range: start 0x0 length 0x2000 00:18:13.539 nvme0n1 : 1.06 1723.52 6.73 0.00 0.00 72510.27 6291.46 109051.90 00:18:13.539 =================================================================================================================== 00:18:13.539 Total : 1723.52 6.73 0.00 0.00 72510.27 6291.46 109051.90 00:18:13.539 0 00:18:13.539 00:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3601510 00:18:13.539 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3601510 ']' 00:18:13.539 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3601510 00:18:13.539 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:13.539 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:13.539 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3601510 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3601510' 00:18:13.798 killing process with pid 3601510 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3601510 00:18:13.798 Received shutdown signal, test time was about 1.000000 seconds 00:18:13.798 00:18:13.798 Latency(us) 00:18:13.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.798 =================================================================================================================== 00:18:13.798 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3601510 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3601134 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3601134 ']' 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3601134 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3601134 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3601134' 00:18:13.798 killing process with pid 3601134 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3601134 00:18:13.798 [2024-05-15 00:00:14.387906] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:13.798 [2024-05-15 00:00:14.387949] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:13.798 00:00:14 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 3601134 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3601974 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3601974 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3601974 ']' 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:14.057 00:00:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.319 [2024-05-15 00:00:14.656502] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:14.319 [2024-05-15 00:00:14.656554] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.319 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.319 [2024-05-15 00:00:14.730275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.319 [2024-05-15 00:00:14.796533] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.319 [2024-05-15 00:00:14.796574] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.319 [2024-05-15 00:00:14.796583] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.319 [2024-05-15 00:00:14.796591] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.319 [2024-05-15 00:00:14.796615] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:14.319 [2024-05-15 00:00:14.796643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.890 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:14.890 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:14.890 00:00:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:14.890 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.890 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.147 [2024-05-15 00:00:15.527169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.147 malloc0 00:18:15.147 [2024-05-15 00:00:15.555745] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:15.147 [2024-05-15 00:00:15.555816] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.147 [2024-05-15 00:00:15.556019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3602254 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3602254 /var/tmp/bdevperf.sock 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3602254 ']' 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:15.147 00:00:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.147 [2024-05-15 00:00:15.631666] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:18:15.147 [2024-05-15 00:00:15.631714] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602254 ] 00:18:15.147 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.147 [2024-05-15 00:00:15.699784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.404 [2024-05-15 00:00:15.779092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.969 00:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:15.969 00:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:15.969 00:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UD401VpRKp 00:18:16.227 00:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:16.227 [2024-05-15 00:00:16.762997] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.485 nvme0n1 00:18:16.485 00:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:16.485 Running I/O for 1 seconds... 00:18:17.426 00:18:17.426 Latency(us) 00:18:17.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.426 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:17.426 Verification LBA range: start 0x0 length 0x2000 00:18:17.426 nvme0n1 : 1.05 1689.46 6.60 0.00 0.00 74333.38 5190.45 122473.68 00:18:17.426 =================================================================================================================== 00:18:17.426 Total : 1689.46 6.60 0.00 0.00 74333.38 5190.45 122473.68 00:18:17.427 0 00:18:17.687 00:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:17.687 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.687 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.687 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.687 00:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:17.687 "subsystems": [ 00:18:17.687 { 00:18:17.687 "subsystem": "keyring", 00:18:17.687 "config": [ 00:18:17.687 { 00:18:17.687 "method": "keyring_file_add_key", 00:18:17.687 "params": { 00:18:17.687 "name": "key0", 00:18:17.687 "path": "/tmp/tmp.UD401VpRKp" 00:18:17.687 } 00:18:17.687 } 00:18:17.687 ] 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "subsystem": "iobuf", 00:18:17.687 "config": [ 00:18:17.687 { 00:18:17.687 "method": "iobuf_set_options", 00:18:17.687 "params": { 00:18:17.687 "small_pool_count": 8192, 00:18:17.687 "large_pool_count": 1024, 00:18:17.687 "small_bufsize": 8192, 00:18:17.687 "large_bufsize": 135168 00:18:17.687 } 00:18:17.687 } 00:18:17.687 ] 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "subsystem": "sock", 00:18:17.687 "config": [ 00:18:17.687 { 00:18:17.687 "method": "sock_impl_set_options", 00:18:17.687 "params": { 00:18:17.687 "impl_name": "posix", 00:18:17.687 "recv_buf_size": 2097152, 
00:18:17.687 "send_buf_size": 2097152, 00:18:17.687 "enable_recv_pipe": true, 00:18:17.687 "enable_quickack": false, 00:18:17.687 "enable_placement_id": 0, 00:18:17.687 "enable_zerocopy_send_server": true, 00:18:17.687 "enable_zerocopy_send_client": false, 00:18:17.687 "zerocopy_threshold": 0, 00:18:17.687 "tls_version": 0, 00:18:17.687 "enable_ktls": false 00:18:17.687 } 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "method": "sock_impl_set_options", 00:18:17.687 "params": { 00:18:17.687 "impl_name": "ssl", 00:18:17.687 "recv_buf_size": 4096, 00:18:17.687 "send_buf_size": 4096, 00:18:17.687 "enable_recv_pipe": true, 00:18:17.687 "enable_quickack": false, 00:18:17.687 "enable_placement_id": 0, 00:18:17.687 "enable_zerocopy_send_server": true, 00:18:17.687 "enable_zerocopy_send_client": false, 00:18:17.687 "zerocopy_threshold": 0, 00:18:17.687 "tls_version": 0, 00:18:17.687 "enable_ktls": false 00:18:17.687 } 00:18:17.687 } 00:18:17.687 ] 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "subsystem": "vmd", 00:18:17.687 "config": [] 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "subsystem": "accel", 00:18:17.687 "config": [ 00:18:17.687 { 00:18:17.687 "method": "accel_set_options", 00:18:17.687 "params": { 00:18:17.687 "small_cache_size": 128, 00:18:17.687 "large_cache_size": 16, 00:18:17.687 "task_count": 2048, 00:18:17.687 "sequence_count": 2048, 00:18:17.687 "buf_count": 2048 00:18:17.687 } 00:18:17.687 } 00:18:17.687 ] 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "subsystem": "bdev", 00:18:17.687 "config": [ 00:18:17.687 { 00:18:17.687 "method": "bdev_set_options", 00:18:17.687 "params": { 00:18:17.687 "bdev_io_pool_size": 65535, 00:18:17.687 "bdev_io_cache_size": 256, 00:18:17.687 "bdev_auto_examine": true, 00:18:17.687 "iobuf_small_cache_size": 128, 00:18:17.687 "iobuf_large_cache_size": 16 00:18:17.687 } 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "method": "bdev_raid_set_options", 00:18:17.687 "params": { 00:18:17.687 "process_window_size_kb": 1024 00:18:17.687 } 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "method": "bdev_iscsi_set_options", 00:18:17.687 "params": { 00:18:17.687 "timeout_sec": 30 00:18:17.687 } 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "method": "bdev_nvme_set_options", 00:18:17.687 "params": { 00:18:17.687 "action_on_timeout": "none", 00:18:17.687 "timeout_us": 0, 00:18:17.687 "timeout_admin_us": 0, 00:18:17.687 "keep_alive_timeout_ms": 10000, 00:18:17.687 "arbitration_burst": 0, 00:18:17.687 "low_priority_weight": 0, 00:18:17.687 "medium_priority_weight": 0, 00:18:17.687 "high_priority_weight": 0, 00:18:17.687 "nvme_adminq_poll_period_us": 10000, 00:18:17.687 "nvme_ioq_poll_period_us": 0, 00:18:17.687 "io_queue_requests": 0, 00:18:17.687 "delay_cmd_submit": true, 00:18:17.687 "transport_retry_count": 4, 00:18:17.687 "bdev_retry_count": 3, 00:18:17.687 "transport_ack_timeout": 0, 00:18:17.687 "ctrlr_loss_timeout_sec": 0, 00:18:17.687 "reconnect_delay_sec": 0, 00:18:17.687 "fast_io_fail_timeout_sec": 0, 00:18:17.687 "disable_auto_failback": false, 00:18:17.687 "generate_uuids": false, 00:18:17.687 "transport_tos": 0, 00:18:17.687 "nvme_error_stat": false, 00:18:17.687 "rdma_srq_size": 0, 00:18:17.687 "io_path_stat": false, 00:18:17.687 "allow_accel_sequence": false, 00:18:17.687 "rdma_max_cq_size": 0, 00:18:17.687 "rdma_cm_event_timeout_ms": 0, 00:18:17.687 "dhchap_digests": [ 00:18:17.687 "sha256", 00:18:17.687 "sha384", 00:18:17.687 "sha512" 00:18:17.687 ], 00:18:17.687 "dhchap_dhgroups": [ 00:18:17.687 "null", 00:18:17.687 "ffdhe2048", 00:18:17.687 "ffdhe3072", 
00:18:17.687 "ffdhe4096", 00:18:17.687 "ffdhe6144", 00:18:17.687 "ffdhe8192" 00:18:17.687 ] 00:18:17.687 } 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "method": "bdev_nvme_set_hotplug", 00:18:17.687 "params": { 00:18:17.687 "period_us": 100000, 00:18:17.687 "enable": false 00:18:17.687 } 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "method": "bdev_malloc_create", 00:18:17.687 "params": { 00:18:17.687 "name": "malloc0", 00:18:17.687 "num_blocks": 8192, 00:18:17.687 "block_size": 4096, 00:18:17.687 "physical_block_size": 4096, 00:18:17.687 "uuid": "13b1e9f8-63b1-46d7-a3c7-10a7f015aeea", 00:18:17.687 "optimal_io_boundary": 0 00:18:17.687 } 00:18:17.687 }, 00:18:17.687 { 00:18:17.687 "method": "bdev_wait_for_examine" 00:18:17.687 } 00:18:17.687 ] 00:18:17.687 }, 00:18:17.688 { 00:18:17.688 "subsystem": "nbd", 00:18:17.688 "config": [] 00:18:17.688 }, 00:18:17.688 { 00:18:17.688 "subsystem": "scheduler", 00:18:17.688 "config": [ 00:18:17.688 { 00:18:17.688 "method": "framework_set_scheduler", 00:18:17.688 "params": { 00:18:17.688 "name": "static" 00:18:17.688 } 00:18:17.688 } 00:18:17.688 ] 00:18:17.688 }, 00:18:17.688 { 00:18:17.688 "subsystem": "nvmf", 00:18:17.688 "config": [ 00:18:17.688 { 00:18:17.688 "method": "nvmf_set_config", 00:18:17.688 "params": { 00:18:17.688 "discovery_filter": "match_any", 00:18:17.688 "admin_cmd_passthru": { 00:18:17.688 "identify_ctrlr": false 00:18:17.688 } 00:18:17.688 } 00:18:17.688 }, 00:18:17.688 { 00:18:17.688 "method": "nvmf_set_max_subsystems", 00:18:17.688 "params": { 00:18:17.688 "max_subsystems": 1024 00:18:17.688 } 00:18:17.688 }, 00:18:17.688 { 00:18:17.688 "method": "nvmf_set_crdt", 00:18:17.688 "params": { 00:18:17.688 "crdt1": 0, 00:18:17.688 "crdt2": 0, 00:18:17.688 "crdt3": 0 00:18:17.688 } 00:18:17.688 }, 00:18:17.688 { 00:18:17.688 "method": "nvmf_create_transport", 00:18:17.688 "params": { 00:18:17.688 "trtype": "TCP", 00:18:17.688 "max_queue_depth": 128, 00:18:17.688 "max_io_qpairs_per_ctrlr": 127, 00:18:17.688 "in_capsule_data_size": 4096, 00:18:17.688 "max_io_size": 131072, 00:18:17.688 "io_unit_size": 131072, 00:18:17.688 "max_aq_depth": 128, 00:18:17.688 "num_shared_buffers": 511, 00:18:17.688 "buf_cache_size": 4294967295, 00:18:17.688 "dif_insert_or_strip": false, 00:18:17.688 "zcopy": false, 00:18:17.688 "c2h_success": false, 00:18:17.688 "sock_priority": 0, 00:18:17.688 "abort_timeout_sec": 1, 00:18:17.688 "ack_timeout": 0, 00:18:17.688 "data_wr_pool_size": 0 00:18:17.688 } 00:18:17.688 }, 00:18:17.688 { 00:18:17.688 "method": "nvmf_create_subsystem", 00:18:17.688 "params": { 00:18:17.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.688 "allow_any_host": false, 00:18:17.688 "serial_number": "00000000000000000000", 00:18:17.688 "model_number": "SPDK bdev Controller", 00:18:17.688 "max_namespaces": 32, 00:18:17.688 "min_cntlid": 1, 00:18:17.688 "max_cntlid": 65519, 00:18:17.688 "ana_reporting": false 00:18:17.688 } 00:18:17.688 }, 00:18:17.688 { 00:18:17.688 "method": "nvmf_subsystem_add_host", 00:18:17.688 "params": { 00:18:17.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.688 "host": "nqn.2016-06.io.spdk:host1", 00:18:17.688 "psk": "key0" 00:18:17.688 } 00:18:17.688 }, 00:18:17.688 { 00:18:17.688 "method": "nvmf_subsystem_add_ns", 00:18:17.688 "params": { 00:18:17.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.688 "namespace": { 00:18:17.688 "nsid": 1, 00:18:17.688 "bdev_name": "malloc0", 00:18:17.688 "nguid": "13B1E9F863B146D7A3C710A7F015AEEA", 00:18:17.688 "uuid": "13b1e9f8-63b1-46d7-a3c7-10a7f015aeea", 00:18:17.688 
"no_auto_visible": false 00:18:17.688 } 00:18:17.688 } 00:18:17.688 }, 00:18:17.688 { 00:18:17.688 "method": "nvmf_subsystem_add_listener", 00:18:17.688 "params": { 00:18:17.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.688 "listen_address": { 00:18:17.688 "trtype": "TCP", 00:18:17.688 "adrfam": "IPv4", 00:18:17.688 "traddr": "10.0.0.2", 00:18:17.688 "trsvcid": "4420" 00:18:17.688 }, 00:18:17.688 "secure_channel": true 00:18:17.688 } 00:18:17.688 } 00:18:17.688 ] 00:18:17.688 } 00:18:17.688 ] 00:18:17.688 }' 00:18:17.688 00:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:17.946 00:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:17.946 "subsystems": [ 00:18:17.946 { 00:18:17.946 "subsystem": "keyring", 00:18:17.946 "config": [ 00:18:17.946 { 00:18:17.946 "method": "keyring_file_add_key", 00:18:17.946 "params": { 00:18:17.946 "name": "key0", 00:18:17.946 "path": "/tmp/tmp.UD401VpRKp" 00:18:17.946 } 00:18:17.946 } 00:18:17.946 ] 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "subsystem": "iobuf", 00:18:17.946 "config": [ 00:18:17.946 { 00:18:17.946 "method": "iobuf_set_options", 00:18:17.946 "params": { 00:18:17.946 "small_pool_count": 8192, 00:18:17.946 "large_pool_count": 1024, 00:18:17.946 "small_bufsize": 8192, 00:18:17.946 "large_bufsize": 135168 00:18:17.946 } 00:18:17.946 } 00:18:17.946 ] 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "subsystem": "sock", 00:18:17.946 "config": [ 00:18:17.946 { 00:18:17.946 "method": "sock_impl_set_options", 00:18:17.946 "params": { 00:18:17.946 "impl_name": "posix", 00:18:17.946 "recv_buf_size": 2097152, 00:18:17.946 "send_buf_size": 2097152, 00:18:17.946 "enable_recv_pipe": true, 00:18:17.946 "enable_quickack": false, 00:18:17.946 "enable_placement_id": 0, 00:18:17.946 "enable_zerocopy_send_server": true, 00:18:17.946 "enable_zerocopy_send_client": false, 00:18:17.946 "zerocopy_threshold": 0, 00:18:17.946 "tls_version": 0, 00:18:17.946 "enable_ktls": false 00:18:17.946 } 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "method": "sock_impl_set_options", 00:18:17.946 "params": { 00:18:17.946 "impl_name": "ssl", 00:18:17.946 "recv_buf_size": 4096, 00:18:17.946 "send_buf_size": 4096, 00:18:17.946 "enable_recv_pipe": true, 00:18:17.946 "enable_quickack": false, 00:18:17.946 "enable_placement_id": 0, 00:18:17.946 "enable_zerocopy_send_server": true, 00:18:17.946 "enable_zerocopy_send_client": false, 00:18:17.946 "zerocopy_threshold": 0, 00:18:17.946 "tls_version": 0, 00:18:17.946 "enable_ktls": false 00:18:17.946 } 00:18:17.946 } 00:18:17.946 ] 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "subsystem": "vmd", 00:18:17.946 "config": [] 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "subsystem": "accel", 00:18:17.946 "config": [ 00:18:17.946 { 00:18:17.946 "method": "accel_set_options", 00:18:17.946 "params": { 00:18:17.946 "small_cache_size": 128, 00:18:17.946 "large_cache_size": 16, 00:18:17.946 "task_count": 2048, 00:18:17.946 "sequence_count": 2048, 00:18:17.946 "buf_count": 2048 00:18:17.946 } 00:18:17.946 } 00:18:17.946 ] 00:18:17.946 }, 00:18:17.946 { 00:18:17.947 "subsystem": "bdev", 00:18:17.947 "config": [ 00:18:17.947 { 00:18:17.947 "method": "bdev_set_options", 00:18:17.947 "params": { 00:18:17.947 "bdev_io_pool_size": 65535, 00:18:17.947 "bdev_io_cache_size": 256, 00:18:17.947 "bdev_auto_examine": true, 00:18:17.947 "iobuf_small_cache_size": 128, 00:18:17.947 "iobuf_large_cache_size": 16 00:18:17.947 } 00:18:17.947 }, 
00:18:17.947 { 00:18:17.947 "method": "bdev_raid_set_options", 00:18:17.947 "params": { 00:18:17.947 "process_window_size_kb": 1024 00:18:17.947 } 00:18:17.947 }, 00:18:17.947 { 00:18:17.947 "method": "bdev_iscsi_set_options", 00:18:17.947 "params": { 00:18:17.947 "timeout_sec": 30 00:18:17.947 } 00:18:17.947 }, 00:18:17.947 { 00:18:17.947 "method": "bdev_nvme_set_options", 00:18:17.947 "params": { 00:18:17.947 "action_on_timeout": "none", 00:18:17.947 "timeout_us": 0, 00:18:17.947 "timeout_admin_us": 0, 00:18:17.947 "keep_alive_timeout_ms": 10000, 00:18:17.947 "arbitration_burst": 0, 00:18:17.947 "low_priority_weight": 0, 00:18:17.947 "medium_priority_weight": 0, 00:18:17.947 "high_priority_weight": 0, 00:18:17.947 "nvme_adminq_poll_period_us": 10000, 00:18:17.947 "nvme_ioq_poll_period_us": 0, 00:18:17.947 "io_queue_requests": 512, 00:18:17.947 "delay_cmd_submit": true, 00:18:17.947 "transport_retry_count": 4, 00:18:17.947 "bdev_retry_count": 3, 00:18:17.947 "transport_ack_timeout": 0, 00:18:17.947 "ctrlr_loss_timeout_sec": 0, 00:18:17.947 "reconnect_delay_sec": 0, 00:18:17.947 "fast_io_fail_timeout_sec": 0, 00:18:17.947 "disable_auto_failback": false, 00:18:17.947 "generate_uuids": false, 00:18:17.947 "transport_tos": 0, 00:18:17.947 "nvme_error_stat": false, 00:18:17.947 "rdma_srq_size": 0, 00:18:17.947 "io_path_stat": false, 00:18:17.947 "allow_accel_sequence": false, 00:18:17.947 "rdma_max_cq_size": 0, 00:18:17.947 "rdma_cm_event_timeout_ms": 0, 00:18:17.947 "dhchap_digests": [ 00:18:17.947 "sha256", 00:18:17.947 "sha384", 00:18:17.947 "sha512" 00:18:17.947 ], 00:18:17.947 "dhchap_dhgroups": [ 00:18:17.947 "null", 00:18:17.947 "ffdhe2048", 00:18:17.947 "ffdhe3072", 00:18:17.947 "ffdhe4096", 00:18:17.947 "ffdhe6144", 00:18:17.947 "ffdhe8192" 00:18:17.947 ] 00:18:17.947 } 00:18:17.947 }, 00:18:17.947 { 00:18:17.947 "method": "bdev_nvme_attach_controller", 00:18:17.947 "params": { 00:18:17.947 "name": "nvme0", 00:18:17.947 "trtype": "TCP", 00:18:17.947 "adrfam": "IPv4", 00:18:17.947 "traddr": "10.0.0.2", 00:18:17.947 "trsvcid": "4420", 00:18:17.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.947 "prchk_reftag": false, 00:18:17.947 "prchk_guard": false, 00:18:17.947 "ctrlr_loss_timeout_sec": 0, 00:18:17.947 "reconnect_delay_sec": 0, 00:18:17.947 "fast_io_fail_timeout_sec": 0, 00:18:17.947 "psk": "key0", 00:18:17.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.947 "hdgst": false, 00:18:17.947 "ddgst": false 00:18:17.947 } 00:18:17.947 }, 00:18:17.947 { 00:18:17.947 "method": "bdev_nvme_set_hotplug", 00:18:17.947 "params": { 00:18:17.947 "period_us": 100000, 00:18:17.947 "enable": false 00:18:17.947 } 00:18:17.947 }, 00:18:17.947 { 00:18:17.947 "method": "bdev_enable_histogram", 00:18:17.947 "params": { 00:18:17.947 "name": "nvme0n1", 00:18:17.947 "enable": true 00:18:17.947 } 00:18:17.947 }, 00:18:17.947 { 00:18:17.947 "method": "bdev_wait_for_examine" 00:18:17.947 } 00:18:17.947 ] 00:18:17.947 }, 00:18:17.947 { 00:18:17.947 "subsystem": "nbd", 00:18:17.947 "config": [] 00:18:17.947 } 00:18:17.947 ] 00:18:17.947 }' 00:18:17.947 00:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3602254 00:18:17.947 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3602254 ']' 00:18:17.947 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3602254 00:18:17.947 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:17.947 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:17.947 
00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3602254 00:18:17.947 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:17.947 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:17.947 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3602254' 00:18:17.947 killing process with pid 3602254 00:18:17.947 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3602254 00:18:17.947 Received shutdown signal, test time was about 1.000000 seconds 00:18:17.947 00:18:17.947 Latency(us) 00:18:17.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.947 =================================================================================================================== 00:18:17.947 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.947 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3602254 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3601974 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3601974 ']' 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3601974 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3601974 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3601974' 00:18:18.205 killing process with pid 3601974 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3601974 00:18:18.205 [2024-05-15 00:00:18.711370] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:18.205 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3601974 00:18:18.463 00:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:18.463 00:00:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.463 00:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:18.463 "subsystems": [ 00:18:18.463 { 00:18:18.463 "subsystem": "keyring", 00:18:18.463 "config": [ 00:18:18.463 { 00:18:18.463 "method": "keyring_file_add_key", 00:18:18.463 "params": { 00:18:18.463 "name": "key0", 00:18:18.463 "path": "/tmp/tmp.UD401VpRKp" 00:18:18.463 } 00:18:18.463 } 00:18:18.463 ] 00:18:18.463 }, 00:18:18.463 { 00:18:18.463 "subsystem": "iobuf", 00:18:18.463 "config": [ 00:18:18.463 { 00:18:18.463 "method": "iobuf_set_options", 00:18:18.463 "params": { 00:18:18.463 "small_pool_count": 8192, 00:18:18.463 "large_pool_count": 1024, 00:18:18.463 "small_bufsize": 8192, 00:18:18.463 "large_bufsize": 135168 00:18:18.463 } 00:18:18.463 } 00:18:18.463 ] 00:18:18.463 }, 00:18:18.463 { 00:18:18.463 "subsystem": "sock", 00:18:18.463 "config": [ 00:18:18.463 { 00:18:18.463 "method": "sock_impl_set_options", 00:18:18.463 "params": { 00:18:18.463 "impl_name": "posix", 00:18:18.463 
"recv_buf_size": 2097152, 00:18:18.463 "send_buf_size": 2097152, 00:18:18.463 "enable_recv_pipe": true, 00:18:18.463 "enable_quickack": false, 00:18:18.463 "enable_placement_id": 0, 00:18:18.463 "enable_zerocopy_send_server": true, 00:18:18.463 "enable_zerocopy_send_client": false, 00:18:18.463 "zerocopy_threshold": 0, 00:18:18.463 "tls_version": 0, 00:18:18.463 "enable_ktls": false 00:18:18.463 } 00:18:18.463 }, 00:18:18.463 { 00:18:18.463 "method": "sock_impl_set_options", 00:18:18.463 "params": { 00:18:18.463 "impl_name": "ssl", 00:18:18.463 "recv_buf_size": 4096, 00:18:18.463 "send_buf_size": 4096, 00:18:18.463 "enable_recv_pipe": true, 00:18:18.463 "enable_quickack": false, 00:18:18.463 "enable_placement_id": 0, 00:18:18.463 "enable_zerocopy_send_server": true, 00:18:18.463 "enable_zerocopy_send_client": false, 00:18:18.463 "zerocopy_threshold": 0, 00:18:18.463 "tls_version": 0, 00:18:18.463 "enable_ktls": false 00:18:18.463 } 00:18:18.463 } 00:18:18.463 ] 00:18:18.463 }, 00:18:18.463 { 00:18:18.463 "subsystem": "vmd", 00:18:18.463 "config": [] 00:18:18.463 }, 00:18:18.463 { 00:18:18.463 "subsystem": "accel", 00:18:18.463 "config": [ 00:18:18.463 { 00:18:18.463 "method": "accel_set_options", 00:18:18.463 "params": { 00:18:18.463 "small_cache_size": 128, 00:18:18.463 "large_cache_size": 16, 00:18:18.463 "task_count": 2048, 00:18:18.463 "sequence_count": 2048, 00:18:18.463 "buf_count": 2048 00:18:18.463 } 00:18:18.463 } 00:18:18.463 ] 00:18:18.463 }, 00:18:18.463 { 00:18:18.463 "subsystem": "bdev", 00:18:18.463 "config": [ 00:18:18.463 { 00:18:18.463 "method": "bdev_set_options", 00:18:18.463 "params": { 00:18:18.463 "bdev_io_pool_size": 65535, 00:18:18.463 "bdev_io_cache_size": 256, 00:18:18.463 "bdev_auto_examine": true, 00:18:18.464 "iobuf_small_cache_size": 128, 00:18:18.464 "iobuf_large_cache_size": 16 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "bdev_raid_set_options", 00:18:18.464 "params": { 00:18:18.464 "process_window_size_kb": 1024 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "bdev_iscsi_set_options", 00:18:18.464 "params": { 00:18:18.464 "timeout_sec": 30 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "bdev_nvme_set_options", 00:18:18.464 "params": { 00:18:18.464 "action_on_timeout": "none", 00:18:18.464 "timeout_us": 0, 00:18:18.464 "timeout_admin_us": 0, 00:18:18.464 "keep_alive_timeout_ms": 10000, 00:18:18.464 "arbitration_burst": 0, 00:18:18.464 "low_priority_weight": 0, 00:18:18.464 "medium_priority_weight": 0, 00:18:18.464 "high_priority_weight": 0, 00:18:18.464 "nvme_adminq_poll_period_us": 10000, 00:18:18.464 "nvme_ioq_poll_period_us": 0, 00:18:18.464 "io_queue_requests": 0, 00:18:18.464 "delay_cmd_submit": true, 00:18:18.464 "transport_retry_count": 4, 00:18:18.464 "bdev_retry_count": 3, 00:18:18.464 "transport_ack_timeout": 0, 00:18:18.464 "ctrlr_loss_timeout_sec": 0, 00:18:18.464 "reconnect_delay_sec": 0, 00:18:18.464 "fast_io_fail_timeout_sec": 0, 00:18:18.464 "disable_auto_failback": false, 00:18:18.464 "generate_uuids": false, 00:18:18.464 "transport_tos": 0, 00:18:18.464 "nvme_error_stat": false, 00:18:18.464 "rdma_srq_size": 0, 00:18:18.464 "io_path_stat": false, 00:18:18.464 "allow_accel_sequence": false, 00:18:18.464 "rdma_max_cq_size": 0, 00:18:18.464 "rdma_cm_event_timeout_ms": 0, 00:18:18.464 "dhchap_digests": [ 00:18:18.464 "sha256", 00:18:18.464 "sha384", 00:18:18.464 "sha512" 00:18:18.464 ], 00:18:18.464 "dhchap_dhgroups": [ 00:18:18.464 "null", 00:18:18.464 "ffdhe2048", 
00:18:18.464 "ffdhe3072", 00:18:18.464 "ffdhe4096", 00:18:18.464 "ffdhe6144", 00:18:18.464 "ffdhe8192" 00:18:18.464 ] 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "bdev_nvme_set_hotplug", 00:18:18.464 "params": { 00:18:18.464 "period_us": 100000, 00:18:18.464 "enable": false 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "bdev_malloc_create", 00:18:18.464 "params": { 00:18:18.464 "name": "malloc0", 00:18:18.464 "num_blocks": 8192, 00:18:18.464 "block_size": 4096, 00:18:18.464 "physical_block_size": 4096, 00:18:18.464 "uuid": "13b1e9f8-63b1-46d7-a3c7-10a7f015aeea", 00:18:18.464 "optimal_io_boundary": 0 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "bdev_wait_for_examine" 00:18:18.464 } 00:18:18.464 ] 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "subsystem": "nbd", 00:18:18.464 "config": [] 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "subsystem": "scheduler", 00:18:18.464 "config": [ 00:18:18.464 { 00:18:18.464 "method": "framework_set_scheduler", 00:18:18.464 "params": { 00:18:18.464 "name": "static" 00:18:18.464 } 00:18:18.464 } 00:18:18.464 ] 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "subsystem": "nvmf", 00:18:18.464 "config": [ 00:18:18.464 { 00:18:18.464 "method": "nvmf_set_config", 00:18:18.464 "params": { 00:18:18.464 "discovery_filter": "match_any", 00:18:18.464 "admin_cmd_passthru": { 00:18:18.464 "identify_ctrlr": false 00:18:18.464 } 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "nvmf_set_max_subsystems", 00:18:18.464 "params": { 00:18:18.464 "max_subsystems": 1024 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "nvmf_set_crdt", 00:18:18.464 "params": { 00:18:18.464 "crdt1": 0, 00:18:18.464 "crdt2": 0, 00:18:18.464 "crdt3": 0 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "nvmf_create_transport", 00:18:18.464 "params": { 00:18:18.464 "trtype": "TCP", 00:18:18.464 "max_queue_depth": 128, 00:18:18.464 "max_io_qpairs_per_ctrlr": 127, 00:18:18.464 "in_capsule_data_size": 4096, 00:18:18.464 "max_io_size": 131072, 00:18:18.464 "io_unit_size": 131072, 00:18:18.464 "max_aq_depth": 128, 00:18:18.464 "num_shared_buffers": 511, 00:18:18.464 "buf_cache_size": 4294967295, 00:18:18.464 "dif_insert_or_strip": false, 00:18:18.464 "zcopy": false, 00:18:18.464 "c2h_success": false, 00:18:18.464 "sock_priority": 0, 00:18:18.464 "abort_timeout_sec": 1, 00:18:18.464 "ack_timeout": 0, 00:18:18.464 "data_wr_pool_size": 0 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "nvmf_create_subsystem", 00:18:18.464 "params": { 00:18:18.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.464 "allow_any_host": false, 00:18:18.464 "serial_number": "00000000000000000000", 00:18:18.464 "model_n 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:18.464 umber": "SPDK bdev Controller", 00:18:18.464 "max_namespaces": 32, 00:18:18.464 "min_cntlid": 1, 00:18:18.464 "max_cntlid": 65519, 00:18:18.464 "ana_reporting": false 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "nvmf_subsystem_add_host", 00:18:18.464 "params": { 00:18:18.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.464 "host": "nqn.2016-06.io.spdk:host1", 00:18:18.464 "psk": "key0" 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "nvmf_subsystem_add_ns", 00:18:18.464 "params": { 00:18:18.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.464 "namespace": { 00:18:18.464 "nsid": 1, 00:18:18.464 "bdev_name": "malloc0", 00:18:18.464 
"nguid": "13B1E9F863B146D7A3C710A7F015AEEA", 00:18:18.464 "uuid": "13b1e9f8-63b1-46d7-a3c7-10a7f015aeea", 00:18:18.464 "no_auto_visible": false 00:18:18.464 } 00:18:18.464 } 00:18:18.464 }, 00:18:18.464 { 00:18:18.464 "method": "nvmf_subsystem_add_listener", 00:18:18.464 "params": { 00:18:18.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.464 "listen_address": { 00:18:18.464 "trtype": "TCP", 00:18:18.464 "adrfam": "IPv4", 00:18:18.464 "traddr": "10.0.0.2", 00:18:18.464 "trsvcid": "4420" 00:18:18.464 }, 00:18:18.464 "secure_channel": true 00:18:18.464 } 00:18:18.464 } 00:18:18.464 ] 00:18:18.464 } 00:18:18.464 ] 00:18:18.464 }' 00:18:18.464 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.464 00:00:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3602805 00:18:18.464 00:00:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:18.464 00:00:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3602805 00:18:18.464 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3602805 ']' 00:18:18.464 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.464 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:18.464 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.464 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:18.464 00:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.464 [2024-05-15 00:00:18.984333] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:18.464 [2024-05-15 00:00:18.984388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.464 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.721 [2024-05-15 00:00:19.057743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.721 [2024-05-15 00:00:19.129831] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.721 [2024-05-15 00:00:19.129872] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.722 [2024-05-15 00:00:19.129882] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.722 [2024-05-15 00:00:19.129891] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.722 [2024-05-15 00:00:19.129899] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:18.722 [2024-05-15 00:00:19.129958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.978 [2024-05-15 00:00:19.333411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.978 [2024-05-15 00:00:19.365409] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:18.978 [2024-05-15 00:00:19.365461] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.978 [2024-05-15 00:00:19.373347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3603026 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3603026 /var/tmp/bdevperf.sock 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3603026 ']' 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:19.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:19.271 00:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:19.271 "subsystems": [ 00:18:19.271 { 00:18:19.271 "subsystem": "keyring", 00:18:19.271 "config": [ 00:18:19.271 { 00:18:19.271 "method": "keyring_file_add_key", 00:18:19.271 "params": { 00:18:19.271 "name": "key0", 00:18:19.271 "path": "/tmp/tmp.UD401VpRKp" 00:18:19.271 } 00:18:19.271 } 00:18:19.271 ] 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "subsystem": "iobuf", 00:18:19.271 "config": [ 00:18:19.271 { 00:18:19.271 "method": "iobuf_set_options", 00:18:19.271 "params": { 00:18:19.271 "small_pool_count": 8192, 00:18:19.271 "large_pool_count": 1024, 00:18:19.271 "small_bufsize": 8192, 00:18:19.271 "large_bufsize": 135168 00:18:19.271 } 00:18:19.271 } 00:18:19.271 ] 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "subsystem": "sock", 00:18:19.271 "config": [ 00:18:19.271 { 00:18:19.271 "method": "sock_impl_set_options", 00:18:19.271 "params": { 00:18:19.271 "impl_name": "posix", 00:18:19.271 "recv_buf_size": 2097152, 00:18:19.271 "send_buf_size": 2097152, 00:18:19.271 "enable_recv_pipe": true, 00:18:19.271 "enable_quickack": false, 00:18:19.271 "enable_placement_id": 0, 00:18:19.271 "enable_zerocopy_send_server": true, 00:18:19.271 "enable_zerocopy_send_client": false, 00:18:19.271 "zerocopy_threshold": 0, 00:18:19.271 "tls_version": 0, 00:18:19.271 "enable_ktls": false 00:18:19.271 } 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "method": "sock_impl_set_options", 00:18:19.271 "params": { 00:18:19.271 "impl_name": "ssl", 00:18:19.271 "recv_buf_size": 4096, 00:18:19.271 "send_buf_size": 4096, 00:18:19.271 "enable_recv_pipe": true, 00:18:19.271 "enable_quickack": false, 00:18:19.271 "enable_placement_id": 0, 00:18:19.271 "enable_zerocopy_send_server": true, 00:18:19.271 "enable_zerocopy_send_client": false, 00:18:19.271 "zerocopy_threshold": 0, 00:18:19.271 "tls_version": 0, 00:18:19.271 "enable_ktls": false 00:18:19.271 } 00:18:19.271 } 00:18:19.271 ] 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "subsystem": "vmd", 00:18:19.271 "config": [] 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "subsystem": "accel", 00:18:19.271 "config": [ 00:18:19.271 { 00:18:19.271 "method": "accel_set_options", 00:18:19.271 "params": { 00:18:19.271 "small_cache_size": 128, 00:18:19.271 "large_cache_size": 16, 00:18:19.271 "task_count": 2048, 00:18:19.271 "sequence_count": 2048, 00:18:19.271 "buf_count": 2048 00:18:19.271 } 00:18:19.271 } 00:18:19.271 ] 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "subsystem": "bdev", 00:18:19.271 "config": [ 00:18:19.271 { 00:18:19.271 "method": "bdev_set_options", 00:18:19.271 "params": { 00:18:19.271 "bdev_io_pool_size": 65535, 00:18:19.271 "bdev_io_cache_size": 256, 00:18:19.271 "bdev_auto_examine": true, 00:18:19.271 "iobuf_small_cache_size": 128, 00:18:19.271 "iobuf_large_cache_size": 16 00:18:19.271 } 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "method": "bdev_raid_set_options", 00:18:19.271 "params": { 00:18:19.271 "process_window_size_kb": 1024 00:18:19.271 } 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "method": "bdev_iscsi_set_options", 00:18:19.271 "params": { 00:18:19.271 "timeout_sec": 30 00:18:19.271 } 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "method": "bdev_nvme_set_options", 00:18:19.271 "params": { 00:18:19.271 "action_on_timeout": "none", 00:18:19.271 "timeout_us": 0, 00:18:19.271 "timeout_admin_us": 0, 00:18:19.271 "keep_alive_timeout_ms": 10000, 00:18:19.271 "arbitration_burst": 0, 00:18:19.271 
"low_priority_weight": 0, 00:18:19.271 "medium_priority_weight": 0, 00:18:19.271 "high_priority_weight": 0, 00:18:19.271 "nvme_adminq_poll_period_us": 10000, 00:18:19.271 "nvme_ioq_poll_period_us": 0, 00:18:19.271 "io_queue_requests": 512, 00:18:19.271 "delay_cmd_submit": true, 00:18:19.271 "transport_retry_count": 4, 00:18:19.271 "bdev_retry_count": 3, 00:18:19.271 "transport_ack_timeout": 0, 00:18:19.271 "ctrlr_loss_timeout_sec": 0, 00:18:19.271 "reconnect_delay_sec": 0, 00:18:19.271 "fast_io_fail_timeout_sec": 0, 00:18:19.271 "disable_auto_failback": false, 00:18:19.271 "generate_uuids": false, 00:18:19.271 "transport_tos": 0, 00:18:19.271 "nvme_error_stat": false, 00:18:19.271 "rdma_srq_size": 0, 00:18:19.271 "io_path_stat": false, 00:18:19.271 "allow_accel_sequence": false, 00:18:19.271 "rdma_max_cq_size": 0, 00:18:19.271 "rdma_cm_event_timeout_ms": 0, 00:18:19.271 "dhchap_digests": [ 00:18:19.271 "sha256", 00:18:19.271 "sha384", 00:18:19.271 "sha512" 00:18:19.271 ], 00:18:19.271 "dhchap_dhgroups": [ 00:18:19.271 "null", 00:18:19.271 "ffdhe2048", 00:18:19.271 "ffdhe3072", 00:18:19.271 "ffdhe4096", 00:18:19.271 "ffdhe6144", 00:18:19.271 "ffdhe8192" 00:18:19.271 ] 00:18:19.271 } 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "method": "bdev_nvme_attach_controller", 00:18:19.271 "params": { 00:18:19.271 "name": "nvme0", 00:18:19.271 "trtype": "TCP", 00:18:19.271 "adrfam": "IPv4", 00:18:19.271 "traddr": "10.0.0.2", 00:18:19.271 "trsvcid": "4420", 00:18:19.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.271 "prchk_reftag": false, 00:18:19.271 "prchk_guard": false, 00:18:19.271 "ctrlr_loss_timeout_sec": 0, 00:18:19.271 "reconnect_delay_sec": 0, 00:18:19.271 "fast_io_fail_timeout_sec": 0, 00:18:19.271 "psk": "key0", 00:18:19.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.271 "hdgst": false, 00:18:19.271 "ddgst": false 00:18:19.271 } 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "method": "bdev_nvme_set_hotplug", 00:18:19.271 "params": { 00:18:19.271 "period_us": 100000, 00:18:19.271 "enable": false 00:18:19.271 } 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "method": "bdev_enable_histogram", 00:18:19.271 "params": { 00:18:19.271 "name": "nvme0n1", 00:18:19.271 "enable": true 00:18:19.271 } 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "method": "bdev_wait_for_examine" 00:18:19.271 } 00:18:19.271 ] 00:18:19.271 }, 00:18:19.271 { 00:18:19.272 "subsystem": "nbd", 00:18:19.272 "config": [] 00:18:19.272 } 00:18:19.272 ] 00:18:19.272 }' 00:18:19.272 00:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.546 [2024-05-15 00:00:19.864824] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:18:19.546 [2024-05-15 00:00:19.864881] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603026 ] 00:18:19.546 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.546 [2024-05-15 00:00:19.935633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.546 [2024-05-15 00:00:20.006327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.804 [2024-05-15 00:00:20.150584] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.370 00:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:20.370 00:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:20.370 00:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:20.370 00:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:20.370 00:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.370 00:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.370 Running I/O for 1 seconds... 00:18:21.741 00:18:21.741 Latency(us) 00:18:21.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.741 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:21.741 Verification LBA range: start 0x0 length 0x2000 00:18:21.741 nvme0n1 : 1.06 1736.74 6.78 0.00 0.00 72072.08 7182.75 111568.49 00:18:21.741 =================================================================================================================== 00:18:21.741 Total : 1736.74 6.78 0.00 0.00 72072.08 7182.75 111568.49 00:18:21.741 0 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:21.741 nvmf_trace.0 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3603026 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3603026 ']' 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3603026 
00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3603026 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3603026' 00:18:21.741 killing process with pid 3603026 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3603026 00:18:21.741 Received shutdown signal, test time was about 1.000000 seconds 00:18:21.741 00:18:21.741 Latency(us) 00:18:21.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.741 =================================================================================================================== 00:18:21.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.741 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3603026 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:21.999 rmmod nvme_tcp 00:18:21.999 rmmod nvme_fabrics 00:18:21.999 rmmod nvme_keyring 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3602805 ']' 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3602805 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3602805 ']' 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3602805 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3602805 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3602805' 00:18:21.999 killing process with pid 3602805 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3602805 00:18:21.999 [2024-05-15 00:00:22.473097] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:21.999 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 3602805 00:18:22.257 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:22.257 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:22.257 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:22.257 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.257 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:22.257 00:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.257 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.257 00:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.787 00:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:24.787 00:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.8knEUWcR42 /tmp/tmp.MXx2bEKUA5 /tmp/tmp.UD401VpRKp 00:18:24.787 00:18:24.787 real 1m26.726s 00:18:24.787 user 2m7.866s 00:18:24.787 sys 0m34.298s 00:18:24.787 00:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:24.787 00:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.787 ************************************ 00:18:24.787 END TEST nvmf_tls 00:18:24.787 ************************************ 00:18:24.788 00:00:24 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:24.788 00:00:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:24.788 00:00:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:24.788 00:00:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:24.788 ************************************ 00:18:24.788 START TEST nvmf_fips 00:18:24.788 ************************************ 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:24.788 * Looking for test storage... 
00:18:24.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.788 00:00:24 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:24.788 00:00:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:24.788 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:24.789 Error setting digest 00:18:24.789 005226BBFC7E0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:24.789 005226BBFC7E0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:24.789 00:00:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.354 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.354 
00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:31.355 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:31.355 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:31.355 Found net devices under 0000:af:00.0: cvl_0_0 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:31.355 Found net devices under 0000:af:00.1: cvl_0_1 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:18:31.355 00:18:31.355 --- 10.0.0.2 ping statistics --- 00:18:31.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.355 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:18:31.355 00:18:31.355 --- 10.0.0.1 ping statistics --- 00:18:31.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.355 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3607091 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3607091 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3607091 ']' 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:31.355 00:00:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.355 [2024-05-15 00:00:31.587219] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:31.355 [2024-05-15 00:00:31.587268] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.355 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.355 [2024-05-15 00:00:31.660154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.355 [2024-05-15 00:00:31.731927] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.355 [2024-05-15 00:00:31.731965] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
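In short, the nvmf_tcp_init sequence traced above builds a single-host topology between the two E810 ports: cvl_0_0 is moved into a private network namespace for the target while its sibling cvl_0_1 stays in the default namespace for the initiator. A condensed recap of those commands, using only the interface and namespace names from this run:

ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port in
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                  # reachability check

nvmf_tgt is then launched through 'ip netns exec cvl_0_0_ns_spdk', so the target listens on 10.0.0.2:4420 inside the namespace while the initiator connects from 10.0.0.1 in the default namespace.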
00:18:31.355 [2024-05-15 00:00:31.731975] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.355 [2024-05-15 00:00:31.731983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.355 [2024-05-15 00:00:31.732006] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.355 [2024-05-15 00:00:31.732027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:31.921 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:32.178 [2024-05-15 00:00:32.567221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.179 [2024-05-15 00:00:32.583205] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:32.179 [2024-05-15 00:00:32.583248] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.179 [2024-05-15 00:00:32.583421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.179 [2024-05-15 00:00:32.611550] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:32.179 malloc0 00:18:32.179 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:32.179 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3607374 00:18:32.179 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:32.179 00:00:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3607374 /var/tmp/bdevperf.sock 00:18:32.179 00:00:32 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 3607374 ']' 00:18:32.179 00:00:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.179 00:00:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:32.179 00:00:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.179 00:00:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:32.179 00:00:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:32.179 [2024-05-15 00:00:32.696557] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:18:32.179 [2024-05-15 00:00:32.696613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607374 ] 00:18:32.179 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.179 [2024-05-15 00:00:32.762107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.436 [2024-05-15 00:00:32.831622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.002 00:00:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:33.002 00:00:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:18:33.002 00:00:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:33.260 [2024-05-15 00:00:33.617280] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.260 [2024-05-15 00:00:33.617362] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:33.261 TLSTESTn1 00:18:33.261 00:00:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:33.261 Running I/O for 10 seconds... 
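The verification job launched above is driven entirely by the pre-shared key written a few steps earlier. A condensed, initiator-side recap using the key material, RPC socket, and NQNs from this run (paths shortened to the repository root; the target-side subsystem setup performed through scripts/rpc.py is omitted):

echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > test/nvmf/fips/key.txt
chmod 0600 test/nvmf/fips/key.txt                  # PSK file must not be world-readable
# attach a TLS-protected controller through the running bdevperf RPC socket
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/fips/key.txt
# drive the verify workload against the resulting TLSTESTn1 bdev
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests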
00:18:45.459 00:18:45.459 Latency(us) 00:18:45.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.459 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:45.459 Verification LBA range: start 0x0 length 0x2000 00:18:45.459 TLSTESTn1 : 10.06 2097.73 8.19 0.00 0.00 60865.08 5531.24 116601.65 00:18:45.459 =================================================================================================================== 00:18:45.459 Total : 2097.73 8.19 0.00 0.00 60865.08 5531.24 116601.65 00:18:45.459 0 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:45.459 nvmf_trace.0 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3607374 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3607374 ']' 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3607374 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:45.459 00:00:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3607374 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3607374' 00:18:45.459 killing process with pid 3607374 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3607374 00:18:45.459 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.459 00:18:45.459 Latency(us) 00:18:45.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.459 =================================================================================================================== 00:18:45.459 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.459 [2024-05-15 00:00:44.013404] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3607374 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.459 rmmod nvme_tcp 00:18:45.459 rmmod nvme_fabrics 00:18:45.459 rmmod nvme_keyring 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3607091 ']' 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3607091 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3607091 ']' 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3607091 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3607091 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3607091' 00:18:45.459 killing process with pid 3607091 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3607091 00:18:45.459 [2024-05-15 00:00:44.334541] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:45.459 [2024-05-15 00:00:44.334578] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3607091 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.459 00:00:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.025 00:00:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.283 00:00:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:46.283 00:18:46.283 real 0m21.768s 00:18:46.283 user 0m21.979s 00:18:46.283 sys 0m10.636s 00:18:46.283 00:00:46 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:46.283 00:00:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.283 ************************************ 00:18:46.283 END TEST nvmf_fips 00:18:46.283 ************************************ 00:18:46.283 00:00:46 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:18:46.283 00:00:46 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:18:46.283 00:00:46 nvmf_tcp -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:18:46.283 00:00:46 nvmf_tcp -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:18:46.283 00:00:46 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.283 00:00:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.875 00:00:52 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:52.875 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:52.875 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:52.875 Found net devices under 0000:af:00.0: cvl_0_0 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:52.875 Found net devices under 0000:af:00.1: cvl_0_1 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:18:52.875 00:00:52 nvmf_tcp -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
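ADQ is an E810 feature provided by the ice driver, which perf_adq.sh reloads later in this log (rmmod ice; modprobe ice) before running. A hypothetical spot-check of the current driver binding over sysfs, assuming the standard layout and the bus addresses enumerated above:

for pci in 0000:af:00.0 0000:af:00.1; do
    printf '%s driver=%s netdev=%s\n' "$pci" \
        "$(basename "$(readlink -f /sys/bus/pci/devices/$pci/driver)")" \
        "$(ls /sys/bus/pci/devices/$pci/net/)"
done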
00:18:52.875 00:00:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:52.875 00:00:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:52.875 00:00:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.875 ************************************ 00:18:52.875 START TEST nvmf_perf_adq 00:18:52.875 ************************************ 00:18:52.875 00:00:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:52.875 * Looking for test storage... 00:18:52.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.875 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:52.876 00:00:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.441 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:59.442 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:59.442 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:59.442 Found net devices under 0000:af:00.0: cvl_0_0 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:59.442 Found net devices under 0000:af:00.1: cvl_0_1 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:59.442 00:00:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:00.007 00:01:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:02.539 00:01:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:07.800 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:07.800 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:07.800 Found net devices under 0000:af:00.0: cvl_0_0 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:07.800 Found net devices under 0000:af:00.1: cvl_0_1 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.800 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:07.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:19:07.801 00:19:07.801 --- 10.0.0.2 ping statistics --- 00:19:07.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.801 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:19:07.801 00:19:07.801 --- 10.0.0.1 ping statistics --- 00:19:07.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.801 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:07.801 00:01:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3617584 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3617584 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3617584 ']' 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:07.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:07.801 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.801 [2024-05-15 00:01:08.077429] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:19:07.801 [2024-05-15 00:01:08.077478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.801 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.801 [2024-05-15 00:01:08.154268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.801 [2024-05-15 00:01:08.230391] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.801 [2024-05-15 00:01:08.230430] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.801 [2024-05-15 00:01:08.230440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.801 [2024-05-15 00:01:08.230449] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.801 [2024-05-15 00:01:08.230456] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.801 [2024-05-15 00:01:08.230503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.801 [2024-05-15 00:01:08.230529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.801 [2024-05-15 00:01:08.230782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.801 [2024-05-15 00:01:08.230785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.365 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.622 00:01:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:08.622 00:01:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:08.622 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.622 00:01:08 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.622 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.622 00:01:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:08.622 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.622 00:01:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.622 [2024-05-15 00:01:09.076480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.622 Malloc1 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.622 [2024-05-15 00:01:09.126931] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:08.622 [2024-05-15 00:01:09.127204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3617872 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:08.622 00:01:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:08.622 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.143 00:01:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:11.143 00:01:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.143 00:01:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:11.143 00:01:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.143 00:01:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:11.143 "tick_rate": 2500000000, 00:19:11.143 "poll_groups": [ 00:19:11.143 { 00:19:11.143 "name": "nvmf_tgt_poll_group_000", 00:19:11.143 "admin_qpairs": 1, 00:19:11.143 "io_qpairs": 1, 00:19:11.143 "current_admin_qpairs": 1, 00:19:11.143 "current_io_qpairs": 1, 00:19:11.143 "pending_bdev_io": 0, 00:19:11.143 "completed_nvme_io": 18789, 00:19:11.143 "transports": [ 00:19:11.143 { 00:19:11.143 "trtype": "TCP" 00:19:11.143 } 00:19:11.143 ] 00:19:11.143 }, 00:19:11.143 { 00:19:11.143 "name": "nvmf_tgt_poll_group_001", 00:19:11.143 "admin_qpairs": 0, 00:19:11.143 "io_qpairs": 1, 00:19:11.143 "current_admin_qpairs": 0, 00:19:11.143 "current_io_qpairs": 1, 00:19:11.143 "pending_bdev_io": 0, 00:19:11.143 "completed_nvme_io": 18623, 00:19:11.143 "transports": [ 00:19:11.143 { 00:19:11.143 "trtype": "TCP" 00:19:11.143 } 00:19:11.143 ] 00:19:11.143 }, 00:19:11.143 { 00:19:11.143 "name": "nvmf_tgt_poll_group_002", 00:19:11.143 "admin_qpairs": 0, 00:19:11.143 "io_qpairs": 1, 00:19:11.143 "current_admin_qpairs": 0, 00:19:11.143 "current_io_qpairs": 1, 00:19:11.143 "pending_bdev_io": 0, 00:19:11.143 "completed_nvme_io": 18851, 00:19:11.143 "transports": [ 00:19:11.143 { 00:19:11.143 "trtype": "TCP" 00:19:11.143 } 00:19:11.143 ] 00:19:11.143 }, 00:19:11.143 { 00:19:11.143 "name": "nvmf_tgt_poll_group_003", 00:19:11.143 "admin_qpairs": 0, 00:19:11.143 "io_qpairs": 1, 00:19:11.143 "current_admin_qpairs": 0, 00:19:11.143 "current_io_qpairs": 1, 00:19:11.143 "pending_bdev_io": 0, 00:19:11.143 "completed_nvme_io": 18589, 00:19:11.143 "transports": [ 00:19:11.143 { 00:19:11.143 "trtype": "TCP" 00:19:11.143 } 00:19:11.143 ] 00:19:11.143 } 00:19:11.143 ] 00:19:11.143 }' 00:19:11.143 00:01:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:11.143 00:01:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:11.143 00:01:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:11.143 00:01:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:11.143 00:01:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3617872 00:19:19.265 Initializing NVMe Controllers 00:19:19.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:19.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:19.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:19.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:19.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:19.265 Initialization complete. Launching workers. 
00:19:19.265 ======================================================== 00:19:19.265 Latency(us) 00:19:19.265 Device Information : IOPS MiB/s Average min max 00:19:19.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10042.40 39.23 6373.97 2638.32 9811.30 00:19:19.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9954.60 38.89 6430.49 2375.50 11945.67 00:19:19.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9955.10 38.89 6429.60 1747.30 11746.15 00:19:19.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9890.60 38.64 6471.01 2284.01 11953.16 00:19:19.266 ======================================================== 00:19:19.266 Total : 39842.70 155.64 6426.08 1747.30 11953.16 00:19:19.266 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.266 rmmod nvme_tcp 00:19:19.266 rmmod nvme_fabrics 00:19:19.266 rmmod nvme_keyring 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3617584 ']' 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3617584 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3617584 ']' 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3617584 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3617584 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3617584' 00:19:19.266 killing process with pid 3617584 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3617584 00:19:19.266 [2024-05-15 00:01:19.423827] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3617584 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:19.266 00:01:19 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.266 00:01:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.169 00:01:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:21.169 00:01:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:21.169 00:01:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:22.544 00:01:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:25.073 00:01:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:30.335 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:30.336 
00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:30.336 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:30.336 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:30.336 Found net devices under 0000:af:00.0: cvl_0_0 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:30.336 Found net devices under 0000:af:00.1: cvl_0_1 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:30.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:19:30.336 00:19:30.336 --- 10.0.0.2 ping statistics --- 00:19:30.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.336 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:19:30.336 00:19:30.336 --- 10.0.0.1 ping statistics --- 00:19:30.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.336 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:30.336 net.core.busy_poll = 1 00:19:30.336 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:30.337 net.core.busy_read = 1 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3621786 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3621786 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3621786 ']' 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:30.337 00:01:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.337 [2024-05-15 00:01:30.826184] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:19:30.337 [2024-05-15 00:01:30.826255] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.337 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.337 [2024-05-15 00:01:30.900325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.594 [2024-05-15 00:01:30.976408] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.594 [2024-05-15 00:01:30.976448] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.594 [2024-05-15 00:01:30.976459] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.594 [2024-05-15 00:01:30.976468] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.594 [2024-05-15 00:01:30.976475] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
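For readers following the trace, the ADQ-side NIC setup performed just above by adq_configure_driver condenses to the sequence below. This is only a recap of the commands already visible in the log, not new tooling; cvl_0_0, the cvl_0_0_ns_spdk namespace, 10.0.0.2 and port 4420 are this run's values, and $SPDK_DIR stands in for the workspace checkout path.

  # NIC-side commands run inside the target's network namespace.
  # Enable hardware TC offload on the E810 port and turn off packet-inspect optimization.
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

  # Let socket readers busy-poll their queues instead of sleeping on interrupts.
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1

  # Two traffic classes: TC0 gets queues 0-1, TC1 gets queues 2-3 for NVMe/TCP.
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress

  # Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 in hardware (skip_sw).
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

  # Align XPS/receive-queue affinity with the dedicated queues (SPDK helper script).
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The target-side counterpart follows in the RPC calls below: placement ID 1 and zero-copy send are enabled on the posix sock implementation before framework_start_init, and the TCP transport is created with --sock-priority 1 so its connections land on the queues reserved for TC1.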
00:19:30.594 [2024-05-15 00:01:30.976535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.595 [2024-05-15 00:01:30.976655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.595 [2024-05-15 00:01:30.976717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.595 [2024-05-15 00:01:30.976719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.161 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.419 [2024-05-15 00:01:31.808948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.419 Malloc1 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.419 00:01:31 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.419 [2024-05-15 00:01:31.855456] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:31.419 [2024-05-15 00:01:31.855729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3621994 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:31.419 00:01:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:31.419 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.324 00:01:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:33.324 00:01:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.324 00:01:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.324 00:01:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.324 00:01:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:33.324 "tick_rate": 2500000000, 00:19:33.324 "poll_groups": [ 00:19:33.324 { 00:19:33.324 "name": "nvmf_tgt_poll_group_000", 00:19:33.324 "admin_qpairs": 1, 00:19:33.324 "io_qpairs": 2, 00:19:33.324 "current_admin_qpairs": 1, 00:19:33.324 "current_io_qpairs": 2, 00:19:33.324 "pending_bdev_io": 0, 00:19:33.324 "completed_nvme_io": 27720, 00:19:33.324 "transports": [ 00:19:33.324 { 00:19:33.324 "trtype": "TCP" 00:19:33.324 } 00:19:33.324 ] 00:19:33.324 }, 00:19:33.324 { 00:19:33.324 "name": "nvmf_tgt_poll_group_001", 00:19:33.324 "admin_qpairs": 0, 00:19:33.324 "io_qpairs": 2, 00:19:33.324 "current_admin_qpairs": 0, 00:19:33.324 "current_io_qpairs": 2, 00:19:33.324 "pending_bdev_io": 0, 00:19:33.324 "completed_nvme_io": 27687, 00:19:33.324 "transports": [ 00:19:33.324 { 00:19:33.324 "trtype": "TCP" 00:19:33.324 } 00:19:33.324 ] 00:19:33.324 }, 00:19:33.324 { 00:19:33.324 "name": 
"nvmf_tgt_poll_group_002", 00:19:33.324 "admin_qpairs": 0, 00:19:33.324 "io_qpairs": 0, 00:19:33.324 "current_admin_qpairs": 0, 00:19:33.324 "current_io_qpairs": 0, 00:19:33.324 "pending_bdev_io": 0, 00:19:33.324 "completed_nvme_io": 0, 00:19:33.324 "transports": [ 00:19:33.324 { 00:19:33.324 "trtype": "TCP" 00:19:33.324 } 00:19:33.324 ] 00:19:33.324 }, 00:19:33.324 { 00:19:33.324 "name": "nvmf_tgt_poll_group_003", 00:19:33.324 "admin_qpairs": 0, 00:19:33.324 "io_qpairs": 0, 00:19:33.324 "current_admin_qpairs": 0, 00:19:33.324 "current_io_qpairs": 0, 00:19:33.324 "pending_bdev_io": 0, 00:19:33.324 "completed_nvme_io": 0, 00:19:33.324 "transports": [ 00:19:33.324 { 00:19:33.324 "trtype": "TCP" 00:19:33.324 } 00:19:33.324 ] 00:19:33.324 } 00:19:33.324 ] 00:19:33.324 }' 00:19:33.325 00:01:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:33.325 00:01:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:33.582 00:01:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:33.582 00:01:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:33.582 00:01:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3621994 00:19:41.681 Initializing NVMe Controllers 00:19:41.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:41.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:41.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:41.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:41.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:41.681 Initialization complete. Launching workers. 
00:19:41.681 ======================================================== 00:19:41.681 Latency(us) 00:19:41.681 Device Information : IOPS MiB/s Average min max 00:19:41.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7342.55 28.68 8717.28 1787.03 52005.98 00:19:41.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7438.25 29.06 8613.24 1703.20 53595.18 00:19:41.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7413.65 28.96 8633.47 1736.78 54328.61 00:19:41.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7166.95 28.00 8931.09 1651.97 54724.80 00:19:41.681 ======================================================== 00:19:41.681 Total : 29361.40 114.69 8721.95 1651.97 54724.80 00:19:41.681 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.681 rmmod nvme_tcp 00:19:41.681 rmmod nvme_fabrics 00:19:41.681 rmmod nvme_keyring 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3621786 ']' 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3621786 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3621786 ']' 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3621786 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3621786 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3621786' 00:19:41.681 killing process with pid 3621786 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3621786 00:19:41.681 [2024-05-15 00:01:42.188178] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:41.681 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3621786 00:19:41.940 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:41.940 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:41.940 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:41.940 00:01:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.940 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.940 00:01:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.940 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.940 00:01:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.247 00:01:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:45.247 00:01:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:45.247 00:19:45.247 real 0m52.521s 00:19:45.247 user 2m46.241s 00:19:45.247 sys 0m13.862s 00:19:45.247 00:01:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:45.247 00:01:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:45.247 ************************************ 00:19:45.247 END TEST nvmf_perf_adq 00:19:45.247 ************************************ 00:19:45.247 00:01:45 nvmf_tcp -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:45.247 00:01:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:45.247 00:01:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:45.247 00:01:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.247 ************************************ 00:19:45.247 START TEST nvmf_shutdown 00:19:45.247 ************************************ 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:45.247 * Looking for test storage... 
00:19:45.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:45.247 ************************************ 00:19:45.247 START TEST nvmf_shutdown_tc1 00:19:45.247 ************************************ 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:19:45.247 00:01:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:45.247 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:45.248 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.248 00:01:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:51.807 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:51.807 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.807 00:01:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:51.807 Found net devices under 0000:af:00.0: cvl_0_0 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:51.807 Found net devices under 0000:af:00.1: cvl_0_1 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:51.807 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.808 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.808 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:51.808 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:51.808 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.808 00:01:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:51.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:19:51.808 00:19:51.808 --- 10.0.0.2 ping statistics --- 00:19:51.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.808 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:51.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:19:51.808 00:19:51.808 --- 10.0.0.1 ping statistics --- 00:19:51.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.808 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3627654 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3627654 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3627654 ']' 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:51.808 00:01:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:51.808 [2024-05-15 00:01:52.282782] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:19:51.808 [2024-05-15 00:01:52.282827] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.808 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.808 [2024-05-15 00:01:52.355798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.066 [2024-05-15 00:01:52.430047] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.066 [2024-05-15 00:01:52.430081] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.066 [2024-05-15 00:01:52.430090] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.066 [2024-05-15 00:01:52.430098] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.066 [2024-05-15 00:01:52.430105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.066 [2024-05-15 00:01:52.430212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.066 [2024-05-15 00:01:52.430330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.066 [2024-05-15 00:01:52.430438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.066 [2024-05-15 00:01:52.430439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.642 [2024-05-15 00:01:53.131918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.642 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.642 Malloc1 00:19:52.900 [2024-05-15 00:01:53.246681] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:52.900 [2024-05-15 00:01:53.246944] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.900 Malloc2 00:19:52.900 Malloc3 00:19:52.900 Malloc4 00:19:52.900 Malloc5 00:19:52.900 Malloc6 00:19:52.900 Malloc7 00:19:53.158 Malloc8 00:19:53.158 Malloc9 00:19:53.158 Malloc10 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.158 00:01:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3627964 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3627964 /var/tmp/bdevperf.sock 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3627964 ']' 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.158 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.158 { 00:19:53.158 "params": { 00:19:53.158 "name": "Nvme$subsystem", 00:19:53.159 "trtype": "$TEST_TRANSPORT", 00:19:53.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.159 "adrfam": "ipv4", 00:19:53.159 "trsvcid": "$NVMF_PORT", 00:19:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.159 "hdgst": ${hdgst:-false}, 00:19:53.159 "ddgst": ${ddgst:-false} 00:19:53.159 }, 00:19:53.159 "method": "bdev_nvme_attach_controller" 00:19:53.159 } 00:19:53.159 EOF 00:19:53.159 )") 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.159 { 00:19:53.159 "params": { 00:19:53.159 "name": "Nvme$subsystem", 00:19:53.159 "trtype": "$TEST_TRANSPORT", 00:19:53.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.159 "adrfam": "ipv4", 00:19:53.159 "trsvcid": "$NVMF_PORT", 00:19:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.159 "hdgst": ${hdgst:-false}, 00:19:53.159 "ddgst": ${ddgst:-false} 00:19:53.159 }, 00:19:53.159 "method": "bdev_nvme_attach_controller" 00:19:53.159 } 00:19:53.159 EOF 00:19:53.159 )") 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.159 { 00:19:53.159 "params": { 00:19:53.159 "name": "Nvme$subsystem", 00:19:53.159 "trtype": "$TEST_TRANSPORT", 00:19:53.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.159 "adrfam": "ipv4", 00:19:53.159 "trsvcid": "$NVMF_PORT", 00:19:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.159 "hdgst": ${hdgst:-false}, 00:19:53.159 "ddgst": ${ddgst:-false} 00:19:53.159 }, 00:19:53.159 "method": "bdev_nvme_attach_controller" 00:19:53.159 } 00:19:53.159 EOF 00:19:53.159 )") 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.159 { 00:19:53.159 "params": { 00:19:53.159 "name": "Nvme$subsystem", 00:19:53.159 "trtype": "$TEST_TRANSPORT", 00:19:53.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.159 "adrfam": "ipv4", 00:19:53.159 "trsvcid": "$NVMF_PORT", 00:19:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.159 "hdgst": ${hdgst:-false}, 00:19:53.159 "ddgst": ${ddgst:-false} 00:19:53.159 }, 00:19:53.159 "method": "bdev_nvme_attach_controller" 00:19:53.159 } 00:19:53.159 EOF 00:19:53.159 )") 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.159 { 00:19:53.159 "params": { 00:19:53.159 "name": "Nvme$subsystem", 00:19:53.159 "trtype": "$TEST_TRANSPORT", 00:19:53.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.159 "adrfam": "ipv4", 00:19:53.159 "trsvcid": "$NVMF_PORT", 00:19:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.159 "hdgst": ${hdgst:-false}, 00:19:53.159 "ddgst": ${ddgst:-false} 00:19:53.159 }, 00:19:53.159 "method": "bdev_nvme_attach_controller" 00:19:53.159 } 00:19:53.159 EOF 00:19:53.159 )") 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.159 [2024-05-15 00:01:53.729248] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:19:53.159 [2024-05-15 00:01:53.729301] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.159 { 00:19:53.159 "params": { 00:19:53.159 "name": "Nvme$subsystem", 00:19:53.159 "trtype": "$TEST_TRANSPORT", 00:19:53.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.159 "adrfam": "ipv4", 00:19:53.159 "trsvcid": "$NVMF_PORT", 00:19:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.159 "hdgst": ${hdgst:-false}, 00:19:53.159 "ddgst": ${ddgst:-false} 00:19:53.159 }, 00:19:53.159 "method": "bdev_nvme_attach_controller" 00:19:53.159 } 00:19:53.159 EOF 00:19:53.159 )") 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.159 { 00:19:53.159 "params": { 00:19:53.159 "name": "Nvme$subsystem", 00:19:53.159 "trtype": "$TEST_TRANSPORT", 00:19:53.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.159 "adrfam": "ipv4", 00:19:53.159 "trsvcid": "$NVMF_PORT", 00:19:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.159 "hdgst": ${hdgst:-false}, 00:19:53.159 "ddgst": ${ddgst:-false} 00:19:53.159 }, 00:19:53.159 "method": "bdev_nvme_attach_controller" 00:19:53.159 } 00:19:53.159 EOF 00:19:53.159 )") 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.159 { 00:19:53.159 "params": { 00:19:53.159 "name": "Nvme$subsystem", 00:19:53.159 "trtype": "$TEST_TRANSPORT", 00:19:53.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.159 "adrfam": "ipv4", 00:19:53.159 "trsvcid": "$NVMF_PORT", 00:19:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.159 "hdgst": ${hdgst:-false}, 00:19:53.159 "ddgst": ${ddgst:-false} 00:19:53.159 }, 00:19:53.159 "method": "bdev_nvme_attach_controller" 00:19:53.159 } 00:19:53.159 EOF 00:19:53.159 )") 00:19:53.159 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.417 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.417 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.417 { 00:19:53.417 "params": { 00:19:53.417 "name": "Nvme$subsystem", 00:19:53.417 "trtype": "$TEST_TRANSPORT", 00:19:53.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.417 "adrfam": "ipv4", 00:19:53.417 "trsvcid": "$NVMF_PORT", 00:19:53.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.417 "hdgst": ${hdgst:-false}, 00:19:53.417 "ddgst": 
${ddgst:-false} 00:19:53.417 }, 00:19:53.417 "method": "bdev_nvme_attach_controller" 00:19:53.417 } 00:19:53.417 EOF 00:19:53.418 )") 00:19:53.418 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.418 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.418 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.418 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.418 { 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme$subsystem", 00:19:53.418 "trtype": "$TEST_TRANSPORT", 00:19:53.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "$NVMF_PORT", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.418 "hdgst": ${hdgst:-false}, 00:19:53.418 "ddgst": ${ddgst:-false} 00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 } 00:19:53.418 EOF 00:19:53.418 )") 00:19:53.418 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.418 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:53.418 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:53.418 00:01:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme1", 00:19:53.418 "trtype": "tcp", 00:19:53.418 "traddr": "10.0.0.2", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "4420", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.418 "hdgst": false, 00:19:53.418 "ddgst": false 00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 },{ 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme2", 00:19:53.418 "trtype": "tcp", 00:19:53.418 "traddr": "10.0.0.2", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "4420", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:53.418 "hdgst": false, 00:19:53.418 "ddgst": false 00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 },{ 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme3", 00:19:53.418 "trtype": "tcp", 00:19:53.418 "traddr": "10.0.0.2", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "4420", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:53.418 "hdgst": false, 00:19:53.418 "ddgst": false 00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 },{ 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme4", 00:19:53.418 "trtype": "tcp", 00:19:53.418 "traddr": "10.0.0.2", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "4420", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:53.418 "hdgst": false, 00:19:53.418 "ddgst": false 00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 },{ 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme5", 00:19:53.418 "trtype": "tcp", 00:19:53.418 "traddr": "10.0.0.2", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "4420", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:53.418 "hdgst": false, 00:19:53.418 "ddgst": false 
00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 },{ 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme6", 00:19:53.418 "trtype": "tcp", 00:19:53.418 "traddr": "10.0.0.2", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "4420", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:53.418 "hdgst": false, 00:19:53.418 "ddgst": false 00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 },{ 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme7", 00:19:53.418 "trtype": "tcp", 00:19:53.418 "traddr": "10.0.0.2", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "4420", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:53.418 "hdgst": false, 00:19:53.418 "ddgst": false 00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 },{ 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme8", 00:19:53.418 "trtype": "tcp", 00:19:53.418 "traddr": "10.0.0.2", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "4420", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:53.418 "hdgst": false, 00:19:53.418 "ddgst": false 00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 },{ 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme9", 00:19:53.418 "trtype": "tcp", 00:19:53.418 "traddr": "10.0.0.2", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "4420", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:53.418 "hdgst": false, 00:19:53.418 "ddgst": false 00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 },{ 00:19:53.418 "params": { 00:19:53.418 "name": "Nvme10", 00:19:53.418 "trtype": "tcp", 00:19:53.418 "traddr": "10.0.0.2", 00:19:53.418 "adrfam": "ipv4", 00:19:53.418 "trsvcid": "4420", 00:19:53.418 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:53.418 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:53.418 "hdgst": false, 00:19:53.418 "ddgst": false 00:19:53.418 }, 00:19:53.418 "method": "bdev_nvme_attach_controller" 00:19:53.418 }' 00:19:53.418 [2024-05-15 00:01:53.802352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.418 [2024-05-15 00:01:53.870631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.790 00:01:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:54.790 00:01:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:19:54.790 00:01:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:54.790 00:01:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.790 00:01:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:54.790 00:01:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.790 00:01:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3627964 00:19:54.790 00:01:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:54.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3627964 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 
-i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:54.790 00:01:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3627654 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.722 { 00:19:55.722 "params": { 00:19:55.722 "name": "Nvme$subsystem", 00:19:55.722 "trtype": "$TEST_TRANSPORT", 00:19:55.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.722 "adrfam": "ipv4", 00:19:55.722 "trsvcid": "$NVMF_PORT", 00:19:55.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.722 "hdgst": ${hdgst:-false}, 00:19:55.722 "ddgst": ${ddgst:-false} 00:19:55.722 }, 00:19:55.722 "method": "bdev_nvme_attach_controller" 00:19:55.722 } 00:19:55.722 EOF 00:19:55.722 )") 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.722 { 00:19:55.722 "params": { 00:19:55.722 "name": "Nvme$subsystem", 00:19:55.722 "trtype": "$TEST_TRANSPORT", 00:19:55.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.722 "adrfam": "ipv4", 00:19:55.722 "trsvcid": "$NVMF_PORT", 00:19:55.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.722 "hdgst": ${hdgst:-false}, 00:19:55.722 "ddgst": ${ddgst:-false} 00:19:55.722 }, 00:19:55.722 "method": "bdev_nvme_attach_controller" 00:19:55.722 } 00:19:55.722 EOF 00:19:55.722 )") 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.722 { 00:19:55.722 "params": { 00:19:55.722 "name": "Nvme$subsystem", 00:19:55.722 "trtype": "$TEST_TRANSPORT", 00:19:55.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.722 "adrfam": "ipv4", 00:19:55.722 "trsvcid": "$NVMF_PORT", 00:19:55.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.722 "hdgst": ${hdgst:-false}, 00:19:55.722 "ddgst": ${ddgst:-false} 00:19:55.722 }, 00:19:55.722 "method": "bdev_nvme_attach_controller" 00:19:55.722 } 00:19:55.722 EOF 00:19:55.722 )") 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:55.722 00:01:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.722 { 00:19:55.722 "params": { 00:19:55.722 "name": "Nvme$subsystem", 00:19:55.722 "trtype": "$TEST_TRANSPORT", 00:19:55.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.722 "adrfam": "ipv4", 00:19:55.722 "trsvcid": "$NVMF_PORT", 00:19:55.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.722 "hdgst": ${hdgst:-false}, 00:19:55.722 "ddgst": ${ddgst:-false} 00:19:55.722 }, 00:19:55.722 "method": "bdev_nvme_attach_controller" 00:19:55.722 } 00:19:55.722 EOF 00:19:55.722 )") 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.722 { 00:19:55.722 "params": { 00:19:55.722 "name": "Nvme$subsystem", 00:19:55.722 "trtype": "$TEST_TRANSPORT", 00:19:55.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.722 "adrfam": "ipv4", 00:19:55.722 "trsvcid": "$NVMF_PORT", 00:19:55.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.722 "hdgst": ${hdgst:-false}, 00:19:55.722 "ddgst": ${ddgst:-false} 00:19:55.722 }, 00:19:55.722 "method": "bdev_nvme_attach_controller" 00:19:55.722 } 00:19:55.722 EOF 00:19:55.722 )") 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.722 { 00:19:55.722 "params": { 00:19:55.722 "name": "Nvme$subsystem", 00:19:55.722 "trtype": "$TEST_TRANSPORT", 00:19:55.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.722 "adrfam": "ipv4", 00:19:55.722 "trsvcid": "$NVMF_PORT", 00:19:55.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.722 "hdgst": ${hdgst:-false}, 00:19:55.722 "ddgst": ${ddgst:-false} 00:19:55.722 }, 00:19:55.722 "method": "bdev_nvme_attach_controller" 00:19:55.722 } 00:19:55.722 EOF 00:19:55.722 )") 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:55.722 [2024-05-15 00:01:56.289118] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:19:55.722 [2024-05-15 00:01:56.289173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3628286 ] 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.722 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.722 { 00:19:55.722 "params": { 00:19:55.722 "name": "Nvme$subsystem", 00:19:55.722 "trtype": "$TEST_TRANSPORT", 00:19:55.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.722 "adrfam": "ipv4", 00:19:55.722 "trsvcid": "$NVMF_PORT", 00:19:55.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.722 "hdgst": ${hdgst:-false}, 00:19:55.723 "ddgst": ${ddgst:-false} 00:19:55.723 }, 00:19:55.723 "method": "bdev_nvme_attach_controller" 00:19:55.723 } 00:19:55.723 EOF 00:19:55.723 )") 00:19:55.723 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:55.723 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.723 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.723 { 00:19:55.723 "params": { 00:19:55.723 "name": "Nvme$subsystem", 00:19:55.723 "trtype": "$TEST_TRANSPORT", 00:19:55.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.723 "adrfam": "ipv4", 00:19:55.723 "trsvcid": "$NVMF_PORT", 00:19:55.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.723 "hdgst": ${hdgst:-false}, 00:19:55.723 "ddgst": ${ddgst:-false} 00:19:55.723 }, 00:19:55.723 "method": "bdev_nvme_attach_controller" 00:19:55.723 } 00:19:55.723 EOF 00:19:55.723 )") 00:19:55.723 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:55.723 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.723 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.723 { 00:19:55.723 "params": { 00:19:55.723 "name": "Nvme$subsystem", 00:19:55.723 "trtype": "$TEST_TRANSPORT", 00:19:55.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.723 "adrfam": "ipv4", 00:19:55.723 "trsvcid": "$NVMF_PORT", 00:19:55.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.723 "hdgst": ${hdgst:-false}, 00:19:55.723 "ddgst": ${ddgst:-false} 00:19:55.723 }, 00:19:55.723 "method": "bdev_nvme_attach_controller" 00:19:55.723 } 00:19:55.723 EOF 00:19:55.723 )") 00:19:55.723 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:55.980 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.980 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.980 { 00:19:55.980 "params": { 00:19:55.980 "name": "Nvme$subsystem", 00:19:55.980 "trtype": "$TEST_TRANSPORT", 00:19:55.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.980 "adrfam": "ipv4", 00:19:55.980 "trsvcid": "$NVMF_PORT", 00:19:55.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.980 "hdgst": ${hdgst:-false}, 
00:19:55.980 "ddgst": ${ddgst:-false} 00:19:55.980 }, 00:19:55.980 "method": "bdev_nvme_attach_controller" 00:19:55.980 } 00:19:55.980 EOF 00:19:55.980 )") 00:19:55.980 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:55.980 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.980 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:55.980 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:55.980 00:01:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:55.980 "params": { 00:19:55.980 "name": "Nvme1", 00:19:55.980 "trtype": "tcp", 00:19:55.980 "traddr": "10.0.0.2", 00:19:55.980 "adrfam": "ipv4", 00:19:55.980 "trsvcid": "4420", 00:19:55.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.980 "hdgst": false, 00:19:55.980 "ddgst": false 00:19:55.980 }, 00:19:55.980 "method": "bdev_nvme_attach_controller" 00:19:55.980 },{ 00:19:55.980 "params": { 00:19:55.980 "name": "Nvme2", 00:19:55.980 "trtype": "tcp", 00:19:55.980 "traddr": "10.0.0.2", 00:19:55.980 "adrfam": "ipv4", 00:19:55.980 "trsvcid": "4420", 00:19:55.980 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:55.980 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:55.980 "hdgst": false, 00:19:55.980 "ddgst": false 00:19:55.980 }, 00:19:55.980 "method": "bdev_nvme_attach_controller" 00:19:55.980 },{ 00:19:55.980 "params": { 00:19:55.980 "name": "Nvme3", 00:19:55.980 "trtype": "tcp", 00:19:55.980 "traddr": "10.0.0.2", 00:19:55.980 "adrfam": "ipv4", 00:19:55.981 "trsvcid": "4420", 00:19:55.981 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:55.981 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:55.981 "hdgst": false, 00:19:55.981 "ddgst": false 00:19:55.981 }, 00:19:55.981 "method": "bdev_nvme_attach_controller" 00:19:55.981 },{ 00:19:55.981 "params": { 00:19:55.981 "name": "Nvme4", 00:19:55.981 "trtype": "tcp", 00:19:55.981 "traddr": "10.0.0.2", 00:19:55.981 "adrfam": "ipv4", 00:19:55.981 "trsvcid": "4420", 00:19:55.981 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:55.981 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:55.981 "hdgst": false, 00:19:55.981 "ddgst": false 00:19:55.981 }, 00:19:55.981 "method": "bdev_nvme_attach_controller" 00:19:55.981 },{ 00:19:55.981 "params": { 00:19:55.981 "name": "Nvme5", 00:19:55.981 "trtype": "tcp", 00:19:55.981 "traddr": "10.0.0.2", 00:19:55.981 "adrfam": "ipv4", 00:19:55.981 "trsvcid": "4420", 00:19:55.981 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:55.981 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:55.981 "hdgst": false, 00:19:55.981 "ddgst": false 00:19:55.981 }, 00:19:55.981 "method": "bdev_nvme_attach_controller" 00:19:55.981 },{ 00:19:55.981 "params": { 00:19:55.981 "name": "Nvme6", 00:19:55.981 "trtype": "tcp", 00:19:55.981 "traddr": "10.0.0.2", 00:19:55.981 "adrfam": "ipv4", 00:19:55.981 "trsvcid": "4420", 00:19:55.981 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:55.981 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:55.981 "hdgst": false, 00:19:55.981 "ddgst": false 00:19:55.981 }, 00:19:55.981 "method": "bdev_nvme_attach_controller" 00:19:55.981 },{ 00:19:55.981 "params": { 00:19:55.981 "name": "Nvme7", 00:19:55.981 "trtype": "tcp", 00:19:55.981 "traddr": "10.0.0.2", 00:19:55.981 "adrfam": "ipv4", 00:19:55.981 "trsvcid": "4420", 00:19:55.981 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:55.981 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:55.981 "hdgst": false, 00:19:55.981 "ddgst": false 
00:19:55.981 }, 00:19:55.981 "method": "bdev_nvme_attach_controller" 00:19:55.981 },{ 00:19:55.981 "params": { 00:19:55.981 "name": "Nvme8", 00:19:55.981 "trtype": "tcp", 00:19:55.981 "traddr": "10.0.0.2", 00:19:55.981 "adrfam": "ipv4", 00:19:55.981 "trsvcid": "4420", 00:19:55.981 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:55.981 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:55.981 "hdgst": false, 00:19:55.981 "ddgst": false 00:19:55.981 }, 00:19:55.981 "method": "bdev_nvme_attach_controller" 00:19:55.981 },{ 00:19:55.981 "params": { 00:19:55.981 "name": "Nvme9", 00:19:55.981 "trtype": "tcp", 00:19:55.981 "traddr": "10.0.0.2", 00:19:55.981 "adrfam": "ipv4", 00:19:55.981 "trsvcid": "4420", 00:19:55.981 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:55.981 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:55.981 "hdgst": false, 00:19:55.981 "ddgst": false 00:19:55.981 }, 00:19:55.981 "method": "bdev_nvme_attach_controller" 00:19:55.981 },{ 00:19:55.981 "params": { 00:19:55.981 "name": "Nvme10", 00:19:55.981 "trtype": "tcp", 00:19:55.981 "traddr": "10.0.0.2", 00:19:55.981 "adrfam": "ipv4", 00:19:55.981 "trsvcid": "4420", 00:19:55.981 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:55.981 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:55.981 "hdgst": false, 00:19:55.981 "ddgst": false 00:19:55.981 }, 00:19:55.981 "method": "bdev_nvme_attach_controller" 00:19:55.981 }' 00:19:55.981 [2024-05-15 00:01:56.361010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.981 [2024-05-15 00:01:56.430621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.348 Running I/O for 1 seconds... 00:19:58.717 00:19:58.717 Latency(us) 00:19:58.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.717 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.717 Verification LBA range: start 0x0 length 0x400 00:19:58.717 Nvme1n1 : 1.10 291.41 18.21 0.00 0.00 217737.63 18769.51 210554.06 00:19:58.717 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.717 Verification LBA range: start 0x0 length 0x400 00:19:58.717 Nvme2n1 : 1.09 292.77 18.30 0.00 0.00 213283.96 17930.65 224814.69 00:19:58.717 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.717 Verification LBA range: start 0x0 length 0x400 00:19:58.717 Nvme3n1 : 1.09 292.59 18.29 0.00 0.00 210699.88 18245.22 208876.34 00:19:58.717 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.717 Verification LBA range: start 0x0 length 0x400 00:19:58.717 Nvme4n1 : 1.11 230.02 14.38 0.00 0.00 264720.18 20656.95 246625.08 00:19:58.717 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.717 Verification LBA range: start 0x0 length 0x400 00:19:58.717 Nvme5n1 : 1.11 288.60 18.04 0.00 0.00 207812.53 19084.08 213909.50 00:19:58.717 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.717 Verification LBA range: start 0x0 length 0x400 00:19:58.717 Nvme6n1 : 1.12 284.52 17.78 0.00 0.00 208080.08 20237.52 209715.20 00:19:58.717 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.717 Verification LBA range: start 0x0 length 0x400 00:19:58.717 Nvme7n1 : 1.20 266.38 16.65 0.00 0.00 212653.34 19188.94 210554.06 00:19:58.717 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.717 Verification LBA range: start 0x0 length 0x400 00:19:58.717 Nvme8n1 : 1.11 288.40 18.03 0.00 0.00 198769.05 
20342.37 212231.78 00:19:58.717 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.717 Verification LBA range: start 0x0 length 0x400 00:19:58.717 Nvme9n1 : 1.13 284.35 17.77 0.00 0.00 198970.08 19293.80 207198.62 00:19:58.717 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.717 Verification LBA range: start 0x0 length 0x400 00:19:58.717 Nvme10n1 : 1.18 325.42 20.34 0.00 0.00 172669.27 9437.18 214748.36 00:19:58.717 =================================================================================================================== 00:19:58.718 Total : 2844.47 177.78 0.00 0.00 208698.58 9437.18 246625.08 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:58.718 rmmod nvme_tcp 00:19:58.718 rmmod nvme_fabrics 00:19:58.718 rmmod nvme_keyring 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3627654 ']' 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3627654 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3627654 ']' 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3627654 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.718 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3627654 00:19:58.974 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:58.974 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:58.974 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 3627654' 00:19:58.974 killing process with pid 3627654 00:19:58.974 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3627654 00:19:58.974 [2024-05-15 00:01:59.338815] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:58.974 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3627654 00:19:59.232 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.232 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:59.232 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.232 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.232 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.232 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.232 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.232 00:01:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.757 00:20:01.757 real 0m16.044s 00:20:01.757 user 0m34.452s 00:20:01.757 sys 0m6.613s 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:01.757 ************************************ 00:20:01.757 END TEST nvmf_shutdown_tc1 00:20:01.757 ************************************ 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:01.757 ************************************ 00:20:01.757 START TEST nvmf_shutdown_tc2 00:20:01.757 ************************************ 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.757 
00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:01.757 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:01.757 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:01.757 Found net devices under 0000:af:00.0: cvl_0_0 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:01.757 Found net devices under 0000:af:00.1: cvl_0_1 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:01.757 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.758 00:02:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:01.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:20:01.758 00:20:01.758 --- 10.0.0.2 ping statistics --- 00:20:01.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.758 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:01.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:20:01.758 00:20:01.758 --- 10.0.0.1 ping statistics --- 00:20:01.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.758 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3629446 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3629446 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3629446 ']' 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:01.758 00:02:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.013 [2024-05-15 00:02:02.369664] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:20:02.013 [2024-05-15 00:02:02.369714] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.013 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.013 [2024-05-15 00:02:02.442819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.013 [2024-05-15 00:02:02.517202] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.013 [2024-05-15 00:02:02.517241] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.013 [2024-05-15 00:02:02.517251] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.013 [2024-05-15 00:02:02.517260] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.013 [2024-05-15 00:02:02.517268] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.013 [2024-05-15 00:02:02.517372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.013 [2024-05-15 00:02:02.517454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.013 [2024-05-15 00:02:02.517564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.013 [2024-05-15 00:02:02.517565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.582 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:02.582 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:20:02.582 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.582 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.582 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.840 [2024-05-15 00:02:03.202934] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.840 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.840 Malloc1 00:20:02.840 [2024-05-15 00:02:03.313674] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:02.840 [2024-05-15 00:02:03.313939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.840 Malloc2 00:20:02.840 Malloc3 00:20:02.840 Malloc4 00:20:03.097 Malloc5 00:20:03.097 Malloc6 00:20:03.097 Malloc7 00:20:03.097 Malloc8 00:20:03.097 Malloc9 00:20:03.354 Malloc10 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.354 00:02:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3629767 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3629767 /var/tmp/bdevperf.sock 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3629767 ']' 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.354 { 00:20:03.354 "params": { 00:20:03.354 "name": "Nvme$subsystem", 00:20:03.354 "trtype": "$TEST_TRANSPORT", 00:20:03.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.354 "adrfam": "ipv4", 00:20:03.354 "trsvcid": "$NVMF_PORT", 00:20:03.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.354 "hdgst": ${hdgst:-false}, 00:20:03.354 "ddgst": ${ddgst:-false} 00:20:03.354 }, 00:20:03.354 "method": "bdev_nvme_attach_controller" 00:20:03.354 } 00:20:03.354 EOF 00:20:03.354 )") 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.354 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.355 { 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme$subsystem", 00:20:03.355 "trtype": "$TEST_TRANSPORT", 00:20:03.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "$NVMF_PORT", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.355 "hdgst": ${hdgst:-false}, 00:20:03.355 "ddgst": ${ddgst:-false} 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 } 00:20:03.355 EOF 00:20:03.355 )") 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.355 { 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme$subsystem", 00:20:03.355 "trtype": "$TEST_TRANSPORT", 00:20:03.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "$NVMF_PORT", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.355 "hdgst": ${hdgst:-false}, 00:20:03.355 "ddgst": ${ddgst:-false} 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 } 00:20:03.355 EOF 00:20:03.355 )") 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.355 { 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme$subsystem", 00:20:03.355 "trtype": "$TEST_TRANSPORT", 00:20:03.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "$NVMF_PORT", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.355 "hdgst": ${hdgst:-false}, 00:20:03.355 "ddgst": ${ddgst:-false} 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 } 00:20:03.355 EOF 00:20:03.355 )") 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.355 { 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme$subsystem", 00:20:03.355 "trtype": "$TEST_TRANSPORT", 00:20:03.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "$NVMF_PORT", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.355 "hdgst": ${hdgst:-false}, 00:20:03.355 "ddgst": ${ddgst:-false} 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 } 00:20:03.355 EOF 00:20:03.355 )") 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.355 [2024-05-15 00:02:03.807255] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:20:03.355 [2024-05-15 00:02:03.807303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629767 ] 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.355 { 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme$subsystem", 00:20:03.355 "trtype": "$TEST_TRANSPORT", 00:20:03.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "$NVMF_PORT", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.355 "hdgst": ${hdgst:-false}, 00:20:03.355 "ddgst": ${ddgst:-false} 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 } 00:20:03.355 EOF 00:20:03.355 )") 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.355 { 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme$subsystem", 00:20:03.355 "trtype": "$TEST_TRANSPORT", 00:20:03.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "$NVMF_PORT", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.355 "hdgst": ${hdgst:-false}, 00:20:03.355 "ddgst": ${ddgst:-false} 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 } 00:20:03.355 EOF 00:20:03.355 )") 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.355 { 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme$subsystem", 00:20:03.355 "trtype": "$TEST_TRANSPORT", 00:20:03.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "$NVMF_PORT", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.355 "hdgst": ${hdgst:-false}, 00:20:03.355 "ddgst": ${ddgst:-false} 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 } 00:20:03.355 EOF 00:20:03.355 )") 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.355 { 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme$subsystem", 00:20:03.355 "trtype": "$TEST_TRANSPORT", 00:20:03.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "$NVMF_PORT", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.355 "hdgst": ${hdgst:-false}, 00:20:03.355 "ddgst": ${ddgst:-false} 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 } 
00:20:03.355 EOF 00:20:03.355 )") 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.355 { 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme$subsystem", 00:20:03.355 "trtype": "$TEST_TRANSPORT", 00:20:03.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "$NVMF_PORT", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.355 "hdgst": ${hdgst:-false}, 00:20:03.355 "ddgst": ${ddgst:-false} 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 } 00:20:03.355 EOF 00:20:03.355 )") 00:20:03.355 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:03.355 00:02:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme1", 00:20:03.355 "trtype": "tcp", 00:20:03.355 "traddr": "10.0.0.2", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "4420", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.355 "hdgst": false, 00:20:03.355 "ddgst": false 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 },{ 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme2", 00:20:03.355 "trtype": "tcp", 00:20:03.355 "traddr": "10.0.0.2", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "4420", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:03.355 "hdgst": false, 00:20:03.355 "ddgst": false 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 },{ 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme3", 00:20:03.355 "trtype": "tcp", 00:20:03.355 "traddr": "10.0.0.2", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "4420", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:03.355 "hdgst": false, 00:20:03.355 "ddgst": false 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 },{ 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme4", 00:20:03.355 "trtype": "tcp", 00:20:03.355 "traddr": "10.0.0.2", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "4420", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:03.355 "hdgst": false, 00:20:03.355 "ddgst": false 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 },{ 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme5", 00:20:03.355 "trtype": "tcp", 00:20:03.355 "traddr": "10.0.0.2", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "4420", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:03.355 "hdgst": false, 00:20:03.355 "ddgst": false 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 },{ 00:20:03.355 "params": { 
00:20:03.355 "name": "Nvme6", 00:20:03.355 "trtype": "tcp", 00:20:03.355 "traddr": "10.0.0.2", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "4420", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:03.355 "hdgst": false, 00:20:03.355 "ddgst": false 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 },{ 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme7", 00:20:03.355 "trtype": "tcp", 00:20:03.355 "traddr": "10.0.0.2", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "4420", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:03.355 "hdgst": false, 00:20:03.355 "ddgst": false 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 },{ 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme8", 00:20:03.355 "trtype": "tcp", 00:20:03.355 "traddr": "10.0.0.2", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "4420", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:03.355 "hdgst": false, 00:20:03.355 "ddgst": false 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 },{ 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme9", 00:20:03.355 "trtype": "tcp", 00:20:03.355 "traddr": "10.0.0.2", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "4420", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:03.355 "hdgst": false, 00:20:03.355 "ddgst": false 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 },{ 00:20:03.355 "params": { 00:20:03.355 "name": "Nvme10", 00:20:03.355 "trtype": "tcp", 00:20:03.355 "traddr": "10.0.0.2", 00:20:03.355 "adrfam": "ipv4", 00:20:03.355 "trsvcid": "4420", 00:20:03.355 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:03.355 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:03.355 "hdgst": false, 00:20:03.355 "ddgst": false 00:20:03.355 }, 00:20:03.355 "method": "bdev_nvme_attach_controller" 00:20:03.355 }' 00:20:03.355 [2024-05-15 00:02:03.879600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.612 [2024-05-15 00:02:03.955061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.977 Running I/O for 10 seconds... 
00:20:04.977 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:04.977 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:20:04.977 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:04.977 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.977 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:05.244 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # ret=0 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3629767 00:20:05.538 00:02:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3629767 ']' 00:20:05.538 00:02:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3629767 00:20:05.538 00:02:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:20:05.538 00:02:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:05.538 00:02:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3629767 00:20:05.538 00:02:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:05.538 00:02:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:05.538 00:02:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3629767' 00:20:05.538 killing process with pid 3629767 00:20:05.538 00:02:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3629767 00:20:05.538 00:02:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3629767 00:20:05.538 Received shutdown signal, test time was about 0.618607 seconds 00:20:05.538 00:20:05.538 Latency(us) 00:20:05.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.538 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.538 Verification LBA range: start 0x0 length 0x400 00:20:05.538 Nvme1n1 : 0.59 324.57 20.29 0.00 0.00 194145.21 18350.08 187904.82 00:20:05.538 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.538 Verification LBA range: start 0x0 length 0x400 00:20:05.538 Nvme2n1 : 0.61 312.52 19.53 0.00 0.00 196738.53 21600.67 189582.54 00:20:05.538 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.538 Verification LBA range: start 0x0 length 0x400 00:20:05.538 Nvme3n1 : 0.62 311.73 19.48 0.00 0.00 192312.66 20447.23 229847.86 00:20:05.538 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.538 Verification LBA range: start 0x0 length 0x400 00:20:05.538 Nvme4n1 : 0.62 310.69 19.42 0.00 0.00 187845.29 19608.37 204682.04 00:20:05.538 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.538 Verification LBA range: start 0x0 length 0x400 00:20:05.538 Nvme5n1 : 0.61 209.83 13.11 0.00 0.00 270846.36 39216.74 261724.57 00:20:05.538 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.538 Verification LBA range: start 0x0 length 0x400 00:20:05.538 Nvme6n1 : 0.59 324.10 20.26 0.00 0.00 167829.78 21915.24 197132.29 00:20:05.538 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.538 Verification LBA range: start 0x0 length 0x400 00:20:05.538 Nvme7n1 : 0.60 213.67 13.35 0.00 0.00 248040.65 33554.43 208876.34 00:20:05.538 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.538 Verification LBA range: start 0x0 length 0x400 
00:20:05.538 Nvme8n1 : 0.60 213.41 13.34 0.00 0.00 242900.17 28730.98 228170.14 00:20:05.538 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.538 Verification LBA range: start 0x0 length 0x400 00:20:05.538 Nvme9n1 : 0.61 315.76 19.74 0.00 0.00 159811.17 20132.66 192099.12 00:20:05.538 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.538 Verification LBA range: start 0x0 length 0x400 00:20:05.538 Nvme10n1 : 0.58 221.52 13.84 0.00 0.00 217286.25 27262.98 189582.54 00:20:05.538 =================================================================================================================== 00:20:05.538 Total : 2757.81 172.36 0.00 0.00 202084.42 18350.08 261724.57 00:20:05.796 00:02:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3629446 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.167 rmmod nvme_tcp 00:20:07.167 rmmod nvme_fabrics 00:20:07.167 rmmod nvme_keyring 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3629446 ']' 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3629446 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3629446 ']' 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3629446 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3629446 00:20:07.167 00:02:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3629446' 00:20:07.167 killing process with pid 3629446 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3629446 00:20:07.167 [2024-05-15 00:02:07.485628] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:07.167 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3629446 00:20:07.426 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:07.426 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.426 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.426 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.426 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.426 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.426 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.426 00:02:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.959 00:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.959 00:20:09.959 real 0m8.043s 00:20:09.959 user 0m23.767s 00:20:09.959 sys 0m1.537s 00:20:09.959 00:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:09.959 00:02:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:09.959 ************************************ 00:20:09.959 END TEST nvmf_shutdown_tc2 00:20:09.959 ************************************ 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:09.959 ************************************ 00:20:09.959 START TEST nvmf_shutdown_tc3 00:20:09.959 ************************************ 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.959 00:02:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:09.959 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.960 00:02:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:09.960 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:09.960 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:09.960 Found net devices under 0000:af:00.0: cvl_0_0 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:09.960 Found net devices under 0000:af:00.1: cvl_0_1 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 
-- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:09.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:20:09.960 00:20:09.960 --- 10.0.0.2 ping statistics --- 00:20:09.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.960 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:20:09.960 00:20:09.960 --- 10.0.0.1 ping statistics --- 00:20:09.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.960 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3630960 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3630960 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3630960 ']' 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:09.960 00:02:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.960 [2024-05-15 00:02:10.476799] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
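The nvmf_tcp_init steps traced above build a self-contained NVMe/TCP test link on a single host: one port of the detected e810 pair (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 for the target, the peer port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator side, TCP port 4420 is opened in iptables, and connectivity is checked with one ping in each direction before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of what those traced commands appear to do is below; the interface names and addresses are the ones seen in this run and would differ on other hosts.

#!/usr/bin/env bash
# Sketch of the namespace-based TCP topology set up by nvmf/common.sh (nvmf_tcp_init).
# cvl_0_0 / cvl_0_1 are the two ports of the e810 NIC found in this run.
TARGET_IF=cvl_0_0              # moves into the target namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1           # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in from the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Confirm the link both ways before starting nvmf_tgt inside the namespace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1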
00:20:09.960 [2024-05-15 00:02:10.476853] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.960 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.219 [2024-05-15 00:02:10.551650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.219 [2024-05-15 00:02:10.628389] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.220 [2024-05-15 00:02:10.628422] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.220 [2024-05-15 00:02:10.628432] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.220 [2024-05-15 00:02:10.628441] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.220 [2024-05-15 00:02:10.628448] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.220 [2024-05-15 00:02:10.628491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.220 [2024-05-15 00:02:10.628575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.220 [2024-05-15 00:02:10.628685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.220 [2024-05-15 00:02:10.628686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.786 [2024-05-15 00:02:11.305841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.786 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.787 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.787 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:10.787 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:10.787 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:10.787 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.787 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.045 Malloc1 00:20:11.045 [2024-05-15 00:02:11.416516] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:11.045 [2024-05-15 00:02:11.416780] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.045 Malloc2 00:20:11.045 Malloc3 00:20:11.045 Malloc4 00:20:11.045 Malloc5 00:20:11.045 Malloc6 00:20:11.304 Malloc7 00:20:11.304 Malloc8 00:20:11.304 Malloc9 00:20:11.304 Malloc10 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.304 00:02:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3631276 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3631276 /var/tmp/bdevperf.sock 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3631276 ']' 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.304 { 00:20:11.304 "params": { 00:20:11.304 "name": "Nvme$subsystem", 00:20:11.304 "trtype": "$TEST_TRANSPORT", 00:20:11.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.304 "adrfam": "ipv4", 00:20:11.304 "trsvcid": "$NVMF_PORT", 00:20:11.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.304 "hdgst": ${hdgst:-false}, 00:20:11.304 "ddgst": ${ddgst:-false} 00:20:11.304 }, 00:20:11.304 "method": "bdev_nvme_attach_controller" 00:20:11.304 } 00:20:11.304 EOF 00:20:11.304 )") 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.304 { 00:20:11.304 "params": { 00:20:11.304 "name": "Nvme$subsystem", 00:20:11.304 "trtype": "$TEST_TRANSPORT", 00:20:11.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.304 "adrfam": "ipv4", 00:20:11.304 "trsvcid": "$NVMF_PORT", 00:20:11.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.304 "hdgst": ${hdgst:-false}, 00:20:11.304 "ddgst": ${ddgst:-false} 00:20:11.304 }, 00:20:11.304 "method": "bdev_nvme_attach_controller" 00:20:11.304 } 00:20:11.304 EOF 00:20:11.304 )") 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.304 { 00:20:11.304 "params": { 00:20:11.304 "name": "Nvme$subsystem", 00:20:11.304 "trtype": "$TEST_TRANSPORT", 00:20:11.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.304 "adrfam": "ipv4", 00:20:11.304 "trsvcid": "$NVMF_PORT", 00:20:11.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.304 "hdgst": ${hdgst:-false}, 00:20:11.304 "ddgst": ${ddgst:-false} 00:20:11.304 }, 00:20:11.304 "method": "bdev_nvme_attach_controller" 00:20:11.304 } 00:20:11.304 EOF 00:20:11.304 )") 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.304 { 00:20:11.304 "params": { 00:20:11.304 "name": "Nvme$subsystem", 00:20:11.304 "trtype": "$TEST_TRANSPORT", 00:20:11.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.304 "adrfam": "ipv4", 00:20:11.304 "trsvcid": "$NVMF_PORT", 00:20:11.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.304 "hdgst": ${hdgst:-false}, 00:20:11.304 "ddgst": ${ddgst:-false} 00:20:11.304 }, 00:20:11.304 "method": "bdev_nvme_attach_controller" 00:20:11.304 } 00:20:11.304 EOF 00:20:11.304 )") 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.304 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.304 { 00:20:11.304 "params": { 00:20:11.304 "name": "Nvme$subsystem", 00:20:11.304 "trtype": "$TEST_TRANSPORT", 00:20:11.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.304 "adrfam": "ipv4", 00:20:11.304 "trsvcid": "$NVMF_PORT", 00:20:11.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.304 "hdgst": ${hdgst:-false}, 00:20:11.304 "ddgst": ${ddgst:-false} 00:20:11.304 }, 00:20:11.304 "method": "bdev_nvme_attach_controller" 00:20:11.304 } 00:20:11.304 EOF 00:20:11.304 )") 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.565 { 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme$subsystem", 00:20:11.565 "trtype": "$TEST_TRANSPORT", 00:20:11.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "$NVMF_PORT", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.565 "hdgst": ${hdgst:-false}, 00:20:11.565 "ddgst": ${ddgst:-false} 00:20:11.565 }, 00:20:11.565 "method": "bdev_nvme_attach_controller" 00:20:11.565 } 00:20:11.565 EOF 00:20:11.565 )") 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.565 [2024-05-15 00:02:11.905419] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 
23.11.0 initialization... 00:20:11.565 [2024-05-15 00:02:11.905470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631276 ] 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.565 { 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme$subsystem", 00:20:11.565 "trtype": "$TEST_TRANSPORT", 00:20:11.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "$NVMF_PORT", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.565 "hdgst": ${hdgst:-false}, 00:20:11.565 "ddgst": ${ddgst:-false} 00:20:11.565 }, 00:20:11.565 "method": "bdev_nvme_attach_controller" 00:20:11.565 } 00:20:11.565 EOF 00:20:11.565 )") 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.565 { 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme$subsystem", 00:20:11.565 "trtype": "$TEST_TRANSPORT", 00:20:11.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "$NVMF_PORT", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.565 "hdgst": ${hdgst:-false}, 00:20:11.565 "ddgst": ${ddgst:-false} 00:20:11.565 }, 00:20:11.565 "method": "bdev_nvme_attach_controller" 00:20:11.565 } 00:20:11.565 EOF 00:20:11.565 )") 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.565 { 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme$subsystem", 00:20:11.565 "trtype": "$TEST_TRANSPORT", 00:20:11.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "$NVMF_PORT", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.565 "hdgst": ${hdgst:-false}, 00:20:11.565 "ddgst": ${ddgst:-false} 00:20:11.565 }, 00:20:11.565 "method": "bdev_nvme_attach_controller" 00:20:11.565 } 00:20:11.565 EOF 00:20:11.565 )") 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.565 { 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme$subsystem", 00:20:11.565 "trtype": "$TEST_TRANSPORT", 00:20:11.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "$NVMF_PORT", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.565 
"hdgst": ${hdgst:-false}, 00:20:11.565 "ddgst": ${ddgst:-false} 00:20:11.565 }, 00:20:11.565 "method": "bdev_nvme_attach_controller" 00:20:11.565 } 00:20:11.565 EOF 00:20:11.565 )") 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:11.565 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:11.565 00:02:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme1", 00:20:11.565 "trtype": "tcp", 00:20:11.565 "traddr": "10.0.0.2", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "4420", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.565 "hdgst": false, 00:20:11.565 "ddgst": false 00:20:11.565 }, 00:20:11.565 "method": "bdev_nvme_attach_controller" 00:20:11.565 },{ 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme2", 00:20:11.565 "trtype": "tcp", 00:20:11.565 "traddr": "10.0.0.2", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "4420", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:11.565 "hdgst": false, 00:20:11.565 "ddgst": false 00:20:11.565 }, 00:20:11.565 "method": "bdev_nvme_attach_controller" 00:20:11.565 },{ 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme3", 00:20:11.565 "trtype": "tcp", 00:20:11.565 "traddr": "10.0.0.2", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "4420", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:11.565 "hdgst": false, 00:20:11.565 "ddgst": false 00:20:11.565 }, 00:20:11.565 "method": "bdev_nvme_attach_controller" 00:20:11.565 },{ 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme4", 00:20:11.565 "trtype": "tcp", 00:20:11.565 "traddr": "10.0.0.2", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "4420", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:11.565 "hdgst": false, 00:20:11.565 "ddgst": false 00:20:11.565 }, 00:20:11.565 "method": "bdev_nvme_attach_controller" 00:20:11.565 },{ 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme5", 00:20:11.565 "trtype": "tcp", 00:20:11.565 "traddr": "10.0.0.2", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "4420", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:11.565 "hdgst": false, 00:20:11.565 "ddgst": false 00:20:11.565 }, 00:20:11.565 "method": "bdev_nvme_attach_controller" 00:20:11.565 },{ 00:20:11.565 "params": { 00:20:11.565 "name": "Nvme6", 00:20:11.565 "trtype": "tcp", 00:20:11.565 "traddr": "10.0.0.2", 00:20:11.565 "adrfam": "ipv4", 00:20:11.565 "trsvcid": "4420", 00:20:11.565 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:11.565 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:11.566 "hdgst": false, 00:20:11.566 "ddgst": false 00:20:11.566 }, 00:20:11.566 "method": "bdev_nvme_attach_controller" 00:20:11.566 },{ 00:20:11.566 "params": { 00:20:11.566 "name": "Nvme7", 00:20:11.566 "trtype": "tcp", 00:20:11.566 "traddr": "10.0.0.2", 00:20:11.566 "adrfam": "ipv4", 00:20:11.566 "trsvcid": "4420", 00:20:11.566 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:11.566 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:11.566 "hdgst": false, 
00:20:11.566 "ddgst": false 00:20:11.566 }, 00:20:11.566 "method": "bdev_nvme_attach_controller" 00:20:11.566 },{ 00:20:11.566 "params": { 00:20:11.566 "name": "Nvme8", 00:20:11.566 "trtype": "tcp", 00:20:11.566 "traddr": "10.0.0.2", 00:20:11.566 "adrfam": "ipv4", 00:20:11.566 "trsvcid": "4420", 00:20:11.566 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:11.566 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:11.566 "hdgst": false, 00:20:11.566 "ddgst": false 00:20:11.566 }, 00:20:11.566 "method": "bdev_nvme_attach_controller" 00:20:11.566 },{ 00:20:11.566 "params": { 00:20:11.566 "name": "Nvme9", 00:20:11.566 "trtype": "tcp", 00:20:11.566 "traddr": "10.0.0.2", 00:20:11.566 "adrfam": "ipv4", 00:20:11.566 "trsvcid": "4420", 00:20:11.566 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:11.566 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:11.566 "hdgst": false, 00:20:11.566 "ddgst": false 00:20:11.566 }, 00:20:11.566 "method": "bdev_nvme_attach_controller" 00:20:11.566 },{ 00:20:11.566 "params": { 00:20:11.566 "name": "Nvme10", 00:20:11.566 "trtype": "tcp", 00:20:11.566 "traddr": "10.0.0.2", 00:20:11.566 "adrfam": "ipv4", 00:20:11.566 "trsvcid": "4420", 00:20:11.566 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:11.566 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:11.566 "hdgst": false, 00:20:11.566 "ddgst": false 00:20:11.566 }, 00:20:11.566 "method": "bdev_nvme_attach_controller" 00:20:11.566 }' 00:20:11.566 [2024-05-15 00:02:11.977226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.566 [2024-05-15 00:02:12.045104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.462 Running I/O for 10 seconds... 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:14.026 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:14.027 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:14.027 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:14.027 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:14.027 00:02:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:14.027 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.027 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:14.027 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.027 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:14.027 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:14.027 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3630960 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3630960 ']' 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3630960 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3630960 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3630960' 00:20:14.295 killing process with pid 3630960 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3630960 00:20:14.295 [2024-05-15 00:02:14.825836] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor 
of trtype' scheduled for removal in v24.09 hit 1 times 00:20:14.295 00:02:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3630960 00:20:14.295 [2024-05-15 00:02:14.826258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826420] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826566] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826610] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826670] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.295 [2024-05-15 00:02:14.826723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.826846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c62c0 is same with the state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.828081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dfd00 is same with the 
state(5) to be set 00:20:14.296 [2024-05-15 00:02:14.829998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
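The burst of ABORTED - SQ DELETION completions in this part of the log is the expected outcome of shutdown_tc3: bdevperf is still driving verify I/O against the ten subsystems when the test tears the target down. The traced shutdown.sh lines above poll bdev_get_iostat on the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads (67 on the first pass, 195 on the second) and then kill the nvmf_tgt process (pid 3630960 in this run), so every command still queued on the deleted submission queues completes as aborted. A rough sketch of that poll-then-kill loop follows, using SPDK's scripts/rpc.py directly rather than the rpc_cmd wrapper seen in the trace, and a plain kill rather than the killprocess helper; the socket path, bdev name, retry count, and 0.25 s interval are taken from this run.

# Wait until bdevperf has pushed enough reads through the target, then kill the
# target while I/O is in flight; outstanding commands complete as ABORTED - SQ DELETION.
nvmfpid=3630960                      # nvmf_tgt pid captured at startup in this run
i=10
while (( i != 0 )); do
    read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                    | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        break                        # enough reads observed; proceed with the shutdown
    fi
    sleep 0.25
    (( i-- ))
done

kill "$nvmfpid"                      # take the target down mid-I/O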
00:20:14.296 [2024-05-15 00:02:14.830242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 
00:02:14.830454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830655] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.296 [2024-05-15 00:02:14.830716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.296 [2024-05-15 00:02:14.830725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.830983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.830994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.297 [2024-05-15 00:02:14.831332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.831737] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e3f30 was disconnected and freed. reset controller. 00:20:14.297 [2024-05-15 00:02:14.833214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:14.297 [2024-05-15 00:02:14.833272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278650 (9): Bad file descriptor 00:20:14.297 [2024-05-15 00:02:14.833324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.297 [2024-05-15 00:02:14.833336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.833348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.297 [2024-05-15 00:02:14.833357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.833366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.297 [2024-05-15 00:02:14.833376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.833386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.297 [2024-05-15 00:02:14.833395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.833404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ca9f0 is same with the state(5) to be set 00:20:14.297 [2024-05-15 00:02:14.833429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:14.297 [2024-05-15 00:02:14.833440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.833453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.297 [2024-05-15 00:02:14.833462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.833472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.297 [2024-05-15 00:02:14.833481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.833491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.297 [2024-05-15 00:02:14.833500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.297 [2024-05-15 00:02:14.833509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1395240 is same with the state(5) to be set 00:20:14.297 [2024-05-15 00:02:14.833619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833748] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.833982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.833993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.298 [2024-05-15 00:02:14.834442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.298 [2024-05-15 00:02:14.834451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.299 [2024-05-15 00:02:14.834913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.299 [2024-05-15 00:02:14.834966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.834974] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1376be0 was disconnected and freed. reset controller. 
00:20:14.299 [2024-05-15 00:02:14.834991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835066] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835149] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835166] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.299 [2024-05-15 00:02:14.835175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835387] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835483] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835533] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.835542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c6c00 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the 
state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.836998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 00:02:14.837007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.300 [2024-05-15 
00:02:14.837015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837085] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:14.301 [2024-05-15 00:02:14.837107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1395240 (9): Bad file descriptor 00:20:14.301 [2024-05-15 00:02:14.837124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c70a0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.301 [2024-05-15 00:02:14.837789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.301 [2024-05-15 00:02:14.837789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7540 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: 
sock connection error of tqpair=0x1278650 with addr=10.0.0.2, port=4420 00:20:14.301 [2024-05-15 00:02:14.837813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278650 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7540 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7540 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7540 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7540 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7540 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7540 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7540 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.837869] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:14.301 [2024-05-15 00:02:14.838320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c79e0 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.838328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278650 (9): Bad file descriptor 00:20:14.301 [2024-05-15 00:02:14.839225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.301 [2024-05-15 00:02:14.839311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same 
with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839507] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839578] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.301 [2024-05-15 00:02:14.839604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1395240 with addr=10.0.0.2, port=4420 00:20:14.301 [2024-05-15 00:02:14.839613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with [2024-05-15 00:02:14.839623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1395240 is same the state(5) to be set 00:20:14.301 with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:14.301 [2024-05-15 00:02:14.839641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839646] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:14.301 [2024-05-15 00:02:14.839650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.301 [2024-05-15 00:02:14.839657] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:20:14.301 [2024-05-15 00:02:14.839659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.839668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.839676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.839685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.839694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.839702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.839712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.839720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.839728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7e80 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840018] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.302 [2024-05-15 00:02:14.840042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1395240 (9): Bad file descriptor 00:20:14.302 [2024-05-15 00:02:14.840093] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:14.302 [2024-05-15 00:02:14.840257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:1[2024-05-15 00:02:14.840492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 00:02:14.840505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with [2024-05-15 00:02:14.840516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:1the state(5) to be set 00:20:14.302 
28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:1[2024-05-15 00:02:14.840564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 00:02:14.840574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:28800 len:1[2024-05-15 00:02:14.840631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:1[2024-05-15 00:02:14.840654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with [2024-05-15 00:02:14.840665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:14.302 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 [2024-05-15 00:02:14.840703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 [2024-05-15 00:02:14.840713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:1[2024-05-15 00:02:14.840722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.302 the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 00:02:14.840733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.302 the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.302 [2024-05-15 00:02:14.840746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.840753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.840762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:1[2024-05-15 00:02:14.840772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with [2024-05-15 00:02:14.840784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:14.303 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.840794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.840804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.840813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.840822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.840831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with [2024-05-15 00:02:14.840840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:1the state(5) to be set 00:20:14.303 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.840851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with [2024-05-15 00:02:14.840853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cthe state(5) to be set 00:20:14.303 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.840862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.840871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.840881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.840890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.840899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with [2024-05-15 00:02:14.840909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:1the state(5) to be set 00:20:14.303 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.840920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.840930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.840939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.840947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.840958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.840968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:1[2024-05-15 00:02:14.840977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.840987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 00:02:14.840987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.841009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.841018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.841027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.841036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:1[2024-05-15 00:02:14.841045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 00:02:14.841057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 
00:02:14.841076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.841085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8320 is same with the state(5) to be set 00:20:14.303 [2024-05-15 00:02:14.841090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.841100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.841110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.841120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.841132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.841141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.841152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.303 [2024-05-15 00:02:14.841161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.303 [2024-05-15 00:02:14.841171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.304 [2024-05-15 00:02:14.841628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with [2024-05-15 00:02:14.841633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:14.304 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.304 [2024-05-15 00:02:14.841644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with [2024-05-15 00:02:14.841644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3770 is same the state(5) to be set 00:20:14.304 with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841699] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11c3770 was disconnected and fr[2024-05-15 00:02:14.841699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with eed. reset controller. 00:20:14.304 the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841845] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set 00:20:14.304 [2024-05-15 00:02:14.841861] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:14.304 [2024-05-15 00:02:14.841874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:14.304 [2024-05-15 00:02:14.841884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:14.304 [2024-05-15 00:02:14.841960] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:14.305 [2024-05-15 00:02:14.842999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.305 [2024-05-15 00:02:14.843020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:14.305 [2024-05-15 00:02:14.843055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12119a0 (9): Bad file descriptor 00:20:14.305 [2024-05-15 00:02:14.843101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.305 [2024-05-15 00:02:14.843434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.305 [2024-05-15 00:02:14.843445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:20:14.305 [2024-05-15 00:02:14.843457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 00:02:14.843468 - 00:02:14.846559] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:12-56 nsid:1 lba:34304-39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 00:02:14.853547 - 00:02:14.853884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c87c0 is same with the state(5) to be set (same message logged repeatedly over this interval)
[2024-05-15 00:02:14.858922 - 00:02:14.858967] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:57-58 nsid:1 lba:40064-40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:14.307 [2024-05-15 00:02:14.859037] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11c2270 was disconnected and freed. reset controller.
[2024-05-15 00:02:14.859265 - 00:02:14.860093] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the same four aborted admin commands are reported in turn for tqpair=0x11ed310, 0x1208500, 0x11f6010, 0x11ed4f0, 0x1385900 and 0x1386200, each group followed by nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of that tqpair is same with the state(5) to be set
00:20:14.307 [2024-05-15 00:02:14.859541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ca9f0 (9): Bad file descriptor
00:20:14.307 [2024-05-15 00:02:14.861787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:14.307 [2024-05-15 00:02:14.861817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ed4f0 (9): Bad file descriptor
[2024-05-15 00:02:14.862283, 00:02:14.862692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:14.307 [2024-05-15 00:02:14.862708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12119a0 with addr=10.0.0.2, port=4420
[2024-05-15 00:02:14.862721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12119a0 is same with the state(5) to be set
[2024-05-15 00:02:14.862800, 00:02:14.862861] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (reported twice)
[2024-05-15 00:02:14.862933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
[2024-05-15 00:02:14.862951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
[2024-05-15 00:02:14.862990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12119a0 (9): Bad file descriptor
[2024-05-15 00:02:14.864143 - 00:02:14.865859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (logged twice per target), followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ed4f0, 0x1278650 and 0x1395240 with addr=10.0.0.2, port=4420, each with nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of that tqpair is same with the state(5) to be set
[2024-05-15 00:02:14.865874 - 00:02:14.865900] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state; controller reinitialization failed; in failed state.
[2024-05-15 00:02:14.866009] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[2024-05-15 00:02:14.866031] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-05-15 00:02:14.866048 - 00:02:14.866081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ed4f0, 0x1278650 and 0x1395240 (9): Bad file descriptor
[2024-05-15 00:02:14.866147 - 00:02:14.866269] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode5], [cnode10] and [cnode2] Ctrlr is in error state; controller reinitialization failed; in failed state.
[2024-05-15 00:02:14.866319 - 00:02:14.866344] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (three controllers)
[2024-05-15 00:02:14.869275 - 00:02:14.869387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ed310, 0x1208500, 0x11f6010, 0x1385900 and 0x1386200 (9): Bad file descriptor
[2024-05-15 00:02:14.869509 - 00:02:14.871384] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 00:02:14.871399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12eacc0 is same with the state(5) to be set
[2024-05-15 00:02:14.872707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
[2024-05-15 00:02:14.872731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-05-15 00:02:14.873237 - 00:02:14.874511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (logged twice per target), followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12119a0 and 0x11ca9f0 with addr=10.0.0.2, port=4420, each with nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of that tqpair is same with the state(5) to be set
[2024-05-15 00:02:14.874819, 00:02:14.874838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12119a0 and 0x11ca9f0 (9): Bad file descriptor
[2024-05-15 00:02:14.874917 - 00:02:14.874984] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode6] and [cnode1] Ctrlr is in error state; controller reinitialization failed; in failed state.
[2024-05-15 00:02:14.875036 - 00:02:14.875067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2], [cnode10] and [cnode5] resetting controller
[2024-05-15 00:02:14.875081, 00:02:14.875091] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (twice)
[2024-05-15 00:02:14.875594 - 00:02:14.877507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (logged twice per target), followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1395240, 0x1278650 and 0x11ed4f0 with addr=10.0.0.2, port=4420, each with nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of that tqpair is same with the state(5) to be set
[2024-05-15 00:02:14.877551 - 00:02:14.877581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1395240, 0x1278650 and 0x11ed4f0 (9): Bad file descriptor
[2024-05-15 00:02:14.877634 - 00:02:14.877737] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode2], [cnode10] and [cnode5] Ctrlr is in error state; controller reinitialization failed; in failed state.
00:20:14.574 [2024-05-15 00:02:14.877781] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.574 [2024-05-15 00:02:14.877792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.574 [2024-05-15 00:02:14.877802] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.574 [2024-05-15 00:02:14.879402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:14.574 [2024-05-15 00:02:14.879650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 
[2024-05-15 00:02:14.879917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.879983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.574 [2024-05-15 00:02:14.879997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.574 [2024-05-15 00:02:14.880009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 
00:02:14.880183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880451] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.575 [2024-05-15 00:02:14.880985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.575 [2024-05-15 00:02:14.880999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.881011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.881025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.881037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.881051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.881063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.881077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.881089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.881102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1377fc0 is same with the state(5) to be set 00:20:14.576 [2024-05-15 00:02:14.882349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882482] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.882983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.882997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.576 [2024-05-15 00:02:14.883306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.576 [2024-05-15 00:02:14.883328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:14.577 [2024-05-15 00:02:14.883566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 
00:02:14.883826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.883986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.883997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.884010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.884021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.884034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.884045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.884057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1379500 is same with the state(5) to be set 00:20:14.577 [2024-05-15 00:02:14.885175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885197] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.577 [2024-05-15 00:02:14.885467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.577 [2024-05-15 00:02:14.885479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.885985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.885996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.578 [2024-05-15 00:02:14.886287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.578 [2024-05-15 00:02:14.886298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.886569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.886581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c4c70 is same with the state(5) to be set 00:20:14.579 [2024-05-15 00:02:14.887691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887771] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.887978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.887991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.579 [2024-05-15 00:02:14.888358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.579 [2024-05-15 00:02:14.888368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:14.580 [2024-05-15 00:02:14.888756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.888975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.888988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 
00:02:14.888999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.889023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.889049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.889073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.889097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.889121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.889146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.889170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.889199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.889223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.889250] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.889262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c6170 is same with the state(5) to be set 00:20:14.580 [2024-05-15 00:02:14.890393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.890411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.890426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.890438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.890451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.580 [2024-05-15 00:02:14.890463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.580 [2024-05-15 00:02:14.890476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.890981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.890993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.581 [2024-05-15 00:02:14.891475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.581 [2024-05-15 00:02:14.891488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:14.582 [2024-05-15 00:02:14.891622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 
00:02:14.891867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.582 [2024-05-15 00:02:14.891967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.582 [2024-05-15 00:02:14.891979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e2a80 is same with the state(5) to be set 00:20:14.582 [2024-05-15 00:02:14.893699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:14.582 [2024-05-15 00:02:14.893725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:14.582 [2024-05-15 00:02:14.893739] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:14.582 [2024-05-15 00:02:14.893753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:14.582 [2024-05-15 00:02:14.893845] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
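Every command in the dump above completes with the same status pair, which SPDK prints as "(00/08)": status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), the expected outcome once the target tears down the submission queues during the controller reset in this shutdown test. Below is a minimal decoding sketch; the helper name and lookup tables are illustrative only and not part of the SPDK API, and the generic-status table is deliberately partial.

# Minimal sketch: decode the "(SCT/SC)" pair SPDK prints in completion lines,
# e.g. "ABORTED - SQ DELETION (00/08)". Names and tables are illustrative.
GENERIC_STATUS = {            # partial list of NVMe generic command status codes
    0x00: "SUCCESSFUL COMPLETION",
    0x07: "COMMAND ABORT REQUESTED",
    0x08: "COMMAND ABORTED DUE TO SQ DELETION",
}
STATUS_CODE_TYPE = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC",
                    0x2: "MEDIA/DATA INTEGRITY", 0x7: "VENDOR SPECIFIC"}

def decode_status(pair: str) -> str:
    """Turn a pair like '(00/08)' into a readable SCT/SC description."""
    sct, sc = (int(x, 16) for x in pair.strip("()").split("/"))
    sct_name = STATUS_CODE_TYPE.get(sct, f"sct=0x{sct:x}")
    if sct == 0 and sc in GENERIC_STATUS:
        return f"{sct_name}: {GENERIC_STATUS[sc]}"
    return f"{sct_name}: sc=0x{sc:02x}"

print(decode_status("(00/08)"))  # -> GENERIC: COMMAND ABORTED DUE TO SQ DELETION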
00:20:14.582 task offset: 32256 on job bdev=Nvme10n1 fails
00:20:14.582
00:20:14.582 Latency(us)
00:20:14.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:14.582 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:14.582 Job: Nvme1n1 ended in about 1.00 seconds with error
00:20:14.582 Verification LBA range: start 0x0 length 0x400
00:20:14.582 Nvme1n1 : 1.00 191.53 11.97 63.84 0.00 248462.54 19293.80 223136.97
00:20:14.582 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:14.582 Job: Nvme2n1 ended in about 0.97 seconds with error
00:20:14.582 Verification LBA range: start 0x0 length 0x400
00:20:14.582 Nvme2n1 : 0.97 264.84 16.55 66.21 0.00 188518.85 3879.73 211392.92
00:20:14.582 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:14.582 Job: Nvme3n1 ended in about 1.01 seconds with error
00:20:14.582 Verification LBA range: start 0x0 length 0x400
00:20:14.582 Nvme3n1 : 1.01 252.93 15.81 63.23 0.00 194599.65 18979.23 207198.62
00:20:14.582 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:14.582 Job: Nvme4n1 ended in about 1.02 seconds with error
00:20:14.582 Verification LBA range: start 0x0 length 0x400
00:20:14.582 Nvme4n1 : 1.02 126.10 7.88 63.05 0.00 320395.95 23802.68 275146.34
00:20:14.582 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:14.582 Job: Nvme5n1 ended in about 0.99 seconds with error
00:20:14.582 Verification LBA range: start 0x0 length 0x400
00:20:14.582 Nvme5n1 : 0.99 258.26 16.14 64.57 0.00 184430.76 18350.08 208037.48
00:20:14.582 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:14.582 Job: Nvme6n1 ended in about 0.97 seconds with error
00:20:14.582 Verification LBA range: start 0x0 length 0x400
00:20:14.582 Nvme6n1 : 0.97 197.32 12.33 65.77 0.00 222316.88 2359.30 248302.80
00:20:14.582 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:14.582 Job: Nvme7n1 ended in about 1.02 seconds with error
00:20:14.582 Verification LBA range: start 0x0 length 0x400
00:20:14.582 Nvme7n1 : 1.02 203.43 12.71 56.02 0.00 221484.27 18979.23 231525.58
00:20:14.582 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:14.582 Job: Nvme8n1 ended in about 1.02 seconds with error
00:20:14.582 Verification LBA range: start 0x0 length 0x400
00:20:14.582 Nvme8n1 : 1.02 188.19 11.76 62.73 0.00 226631.07 21286.09 238236.47
00:20:14.582 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:14.582 Job: Nvme9n1 ended in about 1.02 seconds with error
00:20:14.582 Verification LBA range: start 0x0 length 0x400
00:20:14.582 Nvme9n1 : 1.02 187.69 11.73 62.56 0.00 223587.12 21286.09 223136.97
00:20:14.582 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:14.582 Job: Nvme10n1 ended in about 0.96 seconds with error
00:20:14.582 Verification LBA range: start 0x0 length 0x400
00:20:14.582 Nvme10n1 : 0.96 199.30 12.46 66.43 0.00 204974.75 4875.88 244947.35
00:20:14.582 ===================================================================================================================
00:20:14.582 Total : 2069.61 129.35 634.42 0.00 218788.04 2359.30 275146.34
00:20:14.582 [2024-05-15 00:02:14.916977] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:14.582 [2024-05-15 00:02:14.917018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting
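The summary block above is the bdevperf per-device table: runtime in seconds, IOPS, MiB/s, failed and timed-out I/O per second, then average/min/max latency in microseconds. A rough parsing sketch for pulling those rows out of a saved console log follows; the function, field names, and file path are assumptions for illustration and not part of SPDK's tooling.

# Rough sketch: extract the per-bdev summary rows from a saved console log.
# Function, field names, and file path are assumptions for illustration only.
import re

FIELDS = ("runtime_s", "iops", "mib_s", "fail_s", "to_s", "avg_us", "min_us", "max_us")

def parse_summary(log_text):
    """Yield (bdev_name, columns) for rows such as
    'Nvme1n1 : 1.00 191.53 11.97 63.84 0.00 248462.54 19293.80 223136.97'."""
    for m in re.finditer(r"(Nvme\d+n\d+)\s*:\s*((?:[\d.]+\s+){7}[\d.]+)", log_text):
        values = [float(x) for x in m.group(2).split()]
        yield m.group(1), dict(zip(FIELDS, values))

# Example usage (file name is hypothetical):
# for bdev, row in parse_summary(open("nvmf_shutdown_console.log").read()):
#     print(f"{bdev}: {row['iops']:.2f} IOPS, avg {row['avg_us']:.1f} us")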
controller 00:20:14.582 [2024-05-15 00:02:14.917533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.582 [2024-05-15 00:02:14.917957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.582 [2024-05-15 00:02:14.917973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f6010 with addr=10.0.0.2, port=4420 00:20:14.582 [2024-05-15 00:02:14.917992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6010 is same with the state(5) to be set 00:20:14.582 [2024-05-15 00:02:14.918329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.582 [2024-05-15 00:02:14.918693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.582 [2024-05-15 00:02:14.918706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ed310 with addr=10.0.0.2, port=4420 00:20:14.582 [2024-05-15 00:02:14.918716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ed310 is same with the state(5) to be set 00:20:14.582 [2024-05-15 00:02:14.919127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.582 [2024-05-15 00:02:14.919529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.582 [2024-05-15 00:02:14.919544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1385900 with addr=10.0.0.2, port=4420 00:20:14.582 [2024-05-15 00:02:14.919554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1385900 is same with the state(5) to be set 00:20:14.582 [2024-05-15 00:02:14.919945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.920130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.920143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1386200 with addr=10.0.0.2, port=4420 00:20:14.583 [2024-05-15 00:02:14.920153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1386200 is same with the state(5) to be set 00:20:14.583 [2024-05-15 00:02:14.921277] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:14.583 [2024-05-15 00:02:14.921297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:14.583 [2024-05-15 00:02:14.921308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:14.583 [2024-05-15 00:02:14.921319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:14.583 [2024-05-15 00:02:14.921331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:14.583 [2024-05-15 00:02:14.921835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.922260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.922274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1208500 with addr=10.0.0.2, port=4420 00:20:14.583 [2024-05-15 00:02:14.922285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208500 is same with the state(5) to be set 00:20:14.583 
[2024-05-15 00:02:14.922301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f6010 (9): Bad file descriptor 00:20:14.583 [2024-05-15 00:02:14.922315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ed310 (9): Bad file descriptor 00:20:14.583 [2024-05-15 00:02:14.922326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1385900 (9): Bad file descriptor 00:20:14.583 [2024-05-15 00:02:14.922338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386200 (9): Bad file descriptor 00:20:14.583 [2024-05-15 00:02:14.922373] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:14.583 [2024-05-15 00:02:14.922388] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:14.583 [2024-05-15 00:02:14.922402] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:14.583 [2024-05-15 00:02:14.922415] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:14.583 [2024-05-15 00:02:14.923136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.923548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.923563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ca9f0 with addr=10.0.0.2, port=4420 00:20:14.583 [2024-05-15 00:02:14.923575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ca9f0 is same with the state(5) to be set 00:20:14.583 [2024-05-15 00:02:14.923973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.924298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.924311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12119a0 with addr=10.0.0.2, port=4420 00:20:14.583 [2024-05-15 00:02:14.924320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12119a0 is same with the state(5) to be set 00:20:14.583 [2024-05-15 00:02:14.924519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.924882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.924897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ed4f0 with addr=10.0.0.2, port=4420 00:20:14.583 [2024-05-15 00:02:14.924909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ed4f0 is same with the state(5) to be set 00:20:14.583 [2024-05-15 00:02:14.925290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.925669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.925684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1278650 with addr=10.0.0.2, port=4420 00:20:14.583 [2024-05-15 00:02:14.925696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1278650 is same with the state(5) to be set 00:20:14.583 [2024-05-15 00:02:14.926093] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.926437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.583 [2024-05-15 00:02:14.926452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1395240 with addr=10.0.0.2, port=4420 00:20:14.583 [2024-05-15 00:02:14.926464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1395240 is same with the state(5) to be set 00:20:14.583 [2024-05-15 00:02:14.926480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1208500 (9): Bad file descriptor 00:20:14.583 [2024-05-15 00:02:14.926494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:14.583 [2024-05-15 00:02:14.926505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:14.583 [2024-05-15 00:02:14.926517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:14.583 [2024-05-15 00:02:14.926534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:14.583 [2024-05-15 00:02:14.926545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:14.583 [2024-05-15 00:02:14.926556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:14.583 [2024-05-15 00:02:14.926570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:14.583 [2024-05-15 00:02:14.926580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:14.583 [2024-05-15 00:02:14.926590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:14.583 [2024-05-15 00:02:14.926604] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:14.583 [2024-05-15 00:02:14.926615] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:14.583 [2024-05-15 00:02:14.926628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:14.583 [2024-05-15 00:02:14.926702] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.583 [2024-05-15 00:02:14.926715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.583 [2024-05-15 00:02:14.926724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.583 [2024-05-15 00:02:14.926733] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:14.583 [2024-05-15 00:02:14.926745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ca9f0 (9): Bad file descriptor 00:20:14.583 [2024-05-15 00:02:14.926758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12119a0 (9): Bad file descriptor 00:20:14.583 [2024-05-15 00:02:14.926772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ed4f0 (9): Bad file descriptor 00:20:14.583 [2024-05-15 00:02:14.926786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1278650 (9): Bad file descriptor 00:20:14.583 [2024-05-15 00:02:14.926800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1395240 (9): Bad file descriptor 00:20:14.584 [2024-05-15 00:02:14.926812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:14.584 [2024-05-15 00:02:14.926822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:14.584 [2024-05-15 00:02:14.926833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:14.584 [2024-05-15 00:02:14.926877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.584 [2024-05-15 00:02:14.926890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:14.584 [2024-05-15 00:02:14.926901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:14.584 [2024-05-15 00:02:14.926912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:14.584 [2024-05-15 00:02:14.926924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:14.584 [2024-05-15 00:02:14.926934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:14.584 [2024-05-15 00:02:14.926945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:14.584 [2024-05-15 00:02:14.926958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:14.584 [2024-05-15 00:02:14.926968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:14.584 [2024-05-15 00:02:14.926979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:14.584 [2024-05-15 00:02:14.926991] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:14.584 [2024-05-15 00:02:14.927001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:14.584 [2024-05-15 00:02:14.927011] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:20:14.584 [2024-05-15 00:02:14.927024] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:14.584 [2024-05-15 00:02:14.927034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:14.584 [2024-05-15 00:02:14.927044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:14.584 [2024-05-15 00:02:14.927073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.584 [2024-05-15 00:02:14.927084] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.584 [2024-05-15 00:02:14.927096] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.584 [2024-05-15 00:02:14.927106] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.584 [2024-05-15 00:02:14.927115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.584 [2024-05-15 00:02:14.927184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:14.584 [2024-05-15 00:02:14.927502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.584 [2024-05-15 00:02:14.927924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.584 [2024-05-15 00:02:14.927938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1208500 with addr=10.0.0.2, port=4420 00:20:14.584 [2024-05-15 00:02:14.927951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208500 is same with the state(5) to be set 00:20:14.584 [2024-05-15 00:02:14.927981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1208500 (9): Bad file descriptor 00:20:14.584 [2024-05-15 00:02:14.928014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:14.584 [2024-05-15 00:02:14.928026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:14.584 [2024-05-15 00:02:14.928037] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:14.584 [2024-05-15 00:02:14.928066] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
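The burst of errors above is the expected outcome of shutdown test case 3: the nvmf target application was stopped (spdk_app_stop) while bdevperf still had I/O outstanding, so every host-side reconnect attempt hits connect() errno 111 (ECONNREFUSED, since nothing is listening on 10.0.0.2:4420 any more) and each controller lands in the failed state once reinitialization gives up. A minimal, hypothetical spot check from the shell, not part of the test output, would show the same refusal:

    # Illustrative only: with the target process gone, a plain TCP connect to the
    # listener address fails with the same ECONNREFUSED (errno 111) that
    # posix_sock_create reports above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener on 10.0.0.2:4420 is still accepting connections"
    else
        echo "connect to 10.0.0.2:4420 refused or timed out, as expected after the target exits"
    fi
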
00:20:14.843 00:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:14.843 00:02:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3631276 00:20:15.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3631276) - No such process 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.780 rmmod nvme_tcp 00:20:15.780 rmmod nvme_fabrics 00:20:15.780 rmmod nvme_keyring 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.780 00:02:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.313 00:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:18.313 00:20:18.313 real 0m8.378s 00:20:18.313 user 0m21.403s 00:20:18.313 sys 0m1.748s 00:20:18.313 
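The cleanup above is deliberately tolerant: by the time shutdown.sh line 142 issues kill -9, the target pid (3631276) has already exited, so the kill fails with "No such process" and the script continues straight to stoptarget and nvmftestfini, unloading nvme-tcp/nvme-fabrics and flushing the test addresses. A minimal sketch of that idiom (names and pid taken from this log purely for illustration; the real logic lives in test/nvmf/target/shutdown.sh and may be spelled differently):

    nvmfpid=3631276                           # pid printed earlier in this log, illustrative only
    kill -9 "$nvmfpid" 2>/dev/null || true    # tolerate an already-exited target
    rm -f ./local-job0-0-verify.state         # stoptarget removes per-job state files
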
00:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:18.313 00:02:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.313 ************************************ 00:20:18.313 END TEST nvmf_shutdown_tc3 00:20:18.313 ************************************ 00:20:18.313 00:02:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:18.313 00:20:18.313 real 0m32.882s 00:20:18.313 user 1m19.762s 00:20:18.313 sys 0m10.191s 00:20:18.313 00:02:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:18.313 00:02:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:18.313 ************************************ 00:20:18.313 END TEST nvmf_shutdown 00:20:18.313 ************************************ 00:20:18.313 00:02:18 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:20:18.313 00:02:18 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.313 00:02:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.313 00:02:18 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:20:18.313 00:02:18 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:18.313 00:02:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.313 00:02:18 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:20:18.313 00:02:18 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:18.313 00:02:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:18.313 00:02:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:18.313 00:02:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.313 ************************************ 00:20:18.313 START TEST nvmf_multicontroller 00:20:18.313 ************************************ 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:18.313 * Looking for test storage... 
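With the shutdown tests closed out, the log moves on to nvmf_multicontroller. The nvmftestinit step that follows reuses the standard phy-mode loopback layout: it detects the two E810 ports (cvl_0_0 and cvl_0_1), moves cvl_0_0 into a private network namespace to play the target role, and addresses the pair so host and target reach each other over real NICs. Condensed, the interface setup performed below looks like this (commands and names are taken verbatim from this log; the actual logic lives in test/nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator/host address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # host -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host reachability
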
00:20:18.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:18.313 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:18.314 00:02:18 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:18.314 00:02:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.911 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.912 00:02:25 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:24.912 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:24.912 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:24.912 Found net devices under 0000:af:00.0: cvl_0_0 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:24.912 Found net devices under 0000:af:00.1: cvl_0_1 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.912 00:02:25 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.912 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:25.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:20:25.191 00:20:25.191 --- 10.0.0.2 ping statistics --- 00:20:25.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.191 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:25.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:20:25.191 00:20:25.191 --- 10.0.0.1 ping statistics --- 00:20:25.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.191 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3635819 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3635819 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3635819 ']' 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:25.191 00:02:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:25.191 [2024-05-15 00:02:25.683329] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:20:25.191 [2024-05-15 00:02:25.683374] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.192 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.192 [2024-05-15 00:02:25.756531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:25.450 [2024-05-15 00:02:25.829668] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.450 [2024-05-15 00:02:25.829708] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.450 [2024-05-15 00:02:25.829717] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.450 [2024-05-15 00:02:25.829725] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.450 [2024-05-15 00:02:25.829732] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.450 [2024-05-15 00:02:25.829836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.450 [2024-05-15 00:02:25.829941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.450 [2024-05-15 00:02:25.829942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.015 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:26.015 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:20:26.015 00:02:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.015 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.016 [2024-05-15 00:02:26.526602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.016 00:02:26 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.016 Malloc0 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.016 [2024-05-15 00:02:26.589275] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:26.016 [2024-05-15 00:02:26.589535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.016 [2024-05-15 00:02:26.597434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.016 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.274 Malloc1 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.274 00:02:26 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3635955 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3635955 /var/tmp/bdevperf.sock 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3635955 ']' 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
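At this point the target (nvmfpid 3635819) exposes Malloc0 through nqn.2016-06.io.spdk:cnode1 and Malloc1 through cnode2, each with listeners on 10.0.0.2 ports 4420 and 4421, and bdevperf has been started with its own RPC socket. The remainder of the test drives that socket: it attaches one controller as NVMe0, then repeats bdev_nvme_attach_controller with conflicting arguments and expects every retry to be rejected with JSON-RPC error -114 because a controller named NVMe0 already exists. A rough equivalent using SPDK's scripts/rpc.py (flag spelling mirrors the log's rpc_cmd wrapper; treat this as an illustration, not the test script itself):

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # First attach succeeds and exposes NVMe0n1 to bdevperf.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # Re-attaching under the same name with a different hostnqn (or a different
    # subsystem, or -x disable / -x failover) must fail with code -114.
    if $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001; then
        echo "unexpected: duplicate attach was accepted"
        exit 1
    fi
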
00:20:26.274 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:26.275 00:02:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.208 NVMe0n1 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.208 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.208 1 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.209 request: 00:20:27.209 { 00:20:27.209 "name": "NVMe0", 00:20:27.209 "trtype": "tcp", 00:20:27.209 "traddr": "10.0.0.2", 00:20:27.209 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:27.209 "hostaddr": "10.0.0.2", 00:20:27.209 "hostsvcid": "60000", 00:20:27.209 "adrfam": "ipv4", 00:20:27.209 "trsvcid": "4420", 00:20:27.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.209 "method": 
"bdev_nvme_attach_controller", 00:20:27.209 "req_id": 1 00:20:27.209 } 00:20:27.209 Got JSON-RPC error response 00:20:27.209 response: 00:20:27.209 { 00:20:27.209 "code": -114, 00:20:27.209 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:27.209 } 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.209 request: 00:20:27.209 { 00:20:27.209 "name": "NVMe0", 00:20:27.209 "trtype": "tcp", 00:20:27.209 "traddr": "10.0.0.2", 00:20:27.209 "hostaddr": "10.0.0.2", 00:20:27.209 "hostsvcid": "60000", 00:20:27.209 "adrfam": "ipv4", 00:20:27.209 "trsvcid": "4420", 00:20:27.209 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:27.209 "method": "bdev_nvme_attach_controller", 00:20:27.209 "req_id": 1 00:20:27.209 } 00:20:27.209 Got JSON-RPC error response 00:20:27.209 response: 00:20:27.209 { 00:20:27.209 "code": -114, 00:20:27.209 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:27.209 } 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.209 request: 00:20:27.209 { 00:20:27.209 "name": "NVMe0", 00:20:27.209 "trtype": "tcp", 00:20:27.209 "traddr": "10.0.0.2", 00:20:27.209 "hostaddr": "10.0.0.2", 00:20:27.209 "hostsvcid": "60000", 00:20:27.209 "adrfam": "ipv4", 00:20:27.209 "trsvcid": "4420", 00:20:27.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.209 "multipath": "disable", 00:20:27.209 "method": "bdev_nvme_attach_controller", 00:20:27.209 "req_id": 1 00:20:27.209 } 00:20:27.209 Got JSON-RPC error response 00:20:27.209 response: 00:20:27.209 { 00:20:27.209 "code": -114, 00:20:27.209 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:27.209 } 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.209 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.467 request: 00:20:27.467 { 00:20:27.467 "name": "NVMe0", 00:20:27.467 "trtype": "tcp", 00:20:27.467 "traddr": "10.0.0.2", 00:20:27.467 "hostaddr": "10.0.0.2", 00:20:27.467 "hostsvcid": "60000", 00:20:27.467 "adrfam": "ipv4", 00:20:27.467 "trsvcid": "4420", 00:20:27.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.467 "multipath": "failover", 00:20:27.467 "method": "bdev_nvme_attach_controller", 00:20:27.467 "req_id": 1 00:20:27.467 } 00:20:27.467 Got JSON-RPC error response 00:20:27.467 response: 00:20:27.467 { 00:20:27.467 "code": -114, 00:20:27.467 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:27.467 } 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.467 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.467 00:20:27.467 00:02:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.467 00:02:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:27.467 00:02:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:27.467 00:02:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.467 00:02:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:27.467 00:02:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.467 00:02:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:27.467 00:02:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:28.843 0 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3635955 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3635955 ']' 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3635955 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3635955 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3635955' 00:20:28.843 killing process with pid 3635955 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3635955 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3635955 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:28.843 00:02:29 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:28.843 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:20:29.102 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:29.102 [2024-05-15 00:02:26.703169] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:20:29.102 [2024-05-15 00:02:26.703231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635955 ] 00:20:29.102 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.102 [2024-05-15 00:02:26.775892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.102 [2024-05-15 00:02:26.844804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.102 [2024-05-15 00:02:27.995729] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 536d340f-3012-4f5b-90f0-6c0092f0e715 already exists 00:20:29.102 [2024-05-15 00:02:27.995759] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:536d340f-3012-4f5b-90f0-6c0092f0e715 alias for bdev NVMe1n1 00:20:29.102 [2024-05-15 00:02:27.995771] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:29.102 Running I/O for 1 seconds... 
00:20:29.102 00:20:29.102 Latency(us) 00:20:29.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.102 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:29.102 NVMe0n1 : 1.00 25280.68 98.75 0.00 0.00 5050.64 1913.65 16043.21 00:20:29.102 =================================================================================================================== 00:20:29.102 Total : 25280.68 98.75 0.00 0.00 5050.64 1913.65 16043.21 00:20:29.102 Received shutdown signal, test time was about 1.000000 seconds 00:20:29.102 00:20:29.102 Latency(us) 00:20:29.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.102 =================================================================================================================== 00:20:29.102 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.102 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:29.102 rmmod nvme_tcp 00:20:29.102 rmmod nvme_fabrics 00:20:29.102 rmmod nvme_keyring 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3635819 ']' 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3635819 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3635819 ']' 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3635819 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3635819 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3635819' 00:20:29.102 killing process with pid 3635819 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3635819 00:20:29.102 [2024-05-15 
00:02:29.561457] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:29.102 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3635819 00:20:29.361 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:29.361 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.361 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.361 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.361 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.361 00:02:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.361 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.361 00:02:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.895 00:02:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:31.895 00:20:31.895 real 0m13.252s 00:20:31.895 user 0m16.565s 00:20:31.895 sys 0m6.240s 00:20:31.895 00:02:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:31.895 00:02:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:31.895 ************************************ 00:20:31.895 END TEST nvmf_multicontroller 00:20:31.895 ************************************ 00:20:31.895 00:02:31 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:31.895 00:02:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:31.895 00:02:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:31.895 00:02:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:31.895 ************************************ 00:20:31.895 START TEST nvmf_aer 00:20:31.895 ************************************ 00:20:31.895 00:02:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:31.895 * Looking for test storage... 
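The nvmf_multicontroller run above is exercising bdevperf's RPC socket: re-attaching a controller under an existing name fails with JSON-RPC error -114 ("already exists with the specified network path" / "multipath is disabled"), while adding the same subsystem through the second listener port, or attaching a second controller name, succeeds. The following is a minimal sketch of that sequence, not a drop-in test; it assumes bdevperf is already listening on /var/tmp/bdevperf.sock and that NVMe0 was attached to nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 earlier in the test (all flags and addresses are taken from the trace above).

#!/usr/bin/env bash
# Sketch of the multipath checks driven by host/multicontroller.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Re-using the name NVMe0 for a different subsystem, or with multipath
# explicitly disabled, is expected to fail with code -114.
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 && exit 1
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable && exit 1

# The same subsystem through the second listener is accepted as an extra path
# for NVMe0; after detaching that path, a second controller name also works.
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s $sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$rpc -s $sock bdev_nvme_get_controllers | grep -c NVMe   # expect 2 (NVMe0 + NVMe1)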
00:20:31.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.895 00:02:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:38.451 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:20:38.451 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:38.451 Found net devices under 0000:af:00.0: cvl_0_0 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:38.451 Found net devices under 0000:af:00.1: cvl_0_1 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.451 
00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.451 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:38.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:20:38.452 00:20:38.452 --- 10.0.0.2 ping statistics --- 00:20:38.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.452 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:20:38.452 00:20:38.452 --- 10.0.0.1 ping statistics --- 00:20:38.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.452 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3640095 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3640095 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3640095 ']' 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:38.452 00:02:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:38.452 [2024-05-15 00:02:38.969010] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:20:38.452 [2024-05-15 00:02:38.969055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.452 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.452 [2024-05-15 00:02:39.037060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.710 [2024-05-15 00:02:39.108849] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.710 [2024-05-15 00:02:39.108883] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
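Before the target starts, nvmf_tcp_init splits the two e810 ports between the host and a private network namespace: cvl_0_0 becomes the target interface (10.0.0.2) inside cvl_0_0_ns_spdk, cvl_0_1 stays on the host as the initiator interface (10.0.0.1), port 4420 is opened in iptables, and reachability is verified in both directions with ping. A condensed sketch of that setup, using only the commands, interface names, and addresses shown in the trace above:

# Condensed from the nvmf_tcp_init steps traced above (nvmf/common.sh), run as root.
ip netns add cvl_0_0_ns_spdk                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host

# The target is then launched inside the namespace, as in the trace above:
# ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF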
00:20:38.710 [2024-05-15 00:02:39.108893] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.710 [2024-05-15 00:02:39.108902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.710 [2024-05-15 00:02:39.108909] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.710 [2024-05-15 00:02:39.108951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.710 [2024-05-15 00:02:39.109065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.710 [2024-05-15 00:02:39.109152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.710 [2024-05-15 00:02:39.109153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.276 [2024-05-15 00:02:39.835057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.276 Malloc0 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.276 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:39.277 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.277 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.535 [2024-05-15 00:02:39.889491] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:39.535 [2024-05-15 00:02:39.889759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.535 [ 00:20:39.535 { 00:20:39.535 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:39.535 "subtype": "Discovery", 00:20:39.535 "listen_addresses": [], 00:20:39.535 "allow_any_host": true, 00:20:39.535 "hosts": [] 00:20:39.535 }, 00:20:39.535 { 00:20:39.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.535 "subtype": "NVMe", 00:20:39.535 "listen_addresses": [ 00:20:39.535 { 00:20:39.535 "trtype": "TCP", 00:20:39.535 "adrfam": "IPv4", 00:20:39.535 "traddr": "10.0.0.2", 00:20:39.535 "trsvcid": "4420" 00:20:39.535 } 00:20:39.535 ], 00:20:39.535 "allow_any_host": true, 00:20:39.535 "hosts": [], 00:20:39.535 "serial_number": "SPDK00000000000001", 00:20:39.535 "model_number": "SPDK bdev Controller", 00:20:39.535 "max_namespaces": 2, 00:20:39.535 "min_cntlid": 1, 00:20:39.535 "max_cntlid": 65519, 00:20:39.535 "namespaces": [ 00:20:39.535 { 00:20:39.535 "nsid": 1, 00:20:39.535 "bdev_name": "Malloc0", 00:20:39.535 "name": "Malloc0", 00:20:39.535 "nguid": "F1DF10369B0E461B820B8C72B4BCC9CF", 00:20:39.535 "uuid": "f1df1036-9b0e-461b-820b-8c72b4bcc9cf" 00:20:39.535 } 00:20:39.535 ] 00:20:39.535 } 00:20:39.535 ] 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3640375 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:20:39.535 00:02:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:20:39.535 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.535 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:39.535 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:20:39.535 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:20:39.535 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.793 Malloc1 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.793 Asynchronous Event Request test 00:20:39.793 Attaching to 10.0.0.2 00:20:39.793 Attached to 10.0.0.2 00:20:39.793 Registering asynchronous event callbacks... 00:20:39.793 Starting namespace attribute notice tests for all controllers... 00:20:39.793 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:39.793 aer_cb - Changed Namespace 00:20:39.793 Cleaning up... 00:20:39.793 [ 00:20:39.793 { 00:20:39.793 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:39.793 "subtype": "Discovery", 00:20:39.793 "listen_addresses": [], 00:20:39.793 "allow_any_host": true, 00:20:39.793 "hosts": [] 00:20:39.793 }, 00:20:39.793 { 00:20:39.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.793 "subtype": "NVMe", 00:20:39.793 "listen_addresses": [ 00:20:39.793 { 00:20:39.793 "trtype": "TCP", 00:20:39.793 "adrfam": "IPv4", 00:20:39.793 "traddr": "10.0.0.2", 00:20:39.793 "trsvcid": "4420" 00:20:39.793 } 00:20:39.793 ], 00:20:39.793 "allow_any_host": true, 00:20:39.793 "hosts": [], 00:20:39.793 "serial_number": "SPDK00000000000001", 00:20:39.793 "model_number": "SPDK bdev Controller", 00:20:39.793 "max_namespaces": 2, 00:20:39.793 "min_cntlid": 1, 00:20:39.793 "max_cntlid": 65519, 00:20:39.793 "namespaces": [ 00:20:39.793 { 00:20:39.793 "nsid": 1, 00:20:39.793 "bdev_name": "Malloc0", 00:20:39.793 "name": "Malloc0", 00:20:39.793 "nguid": "F1DF10369B0E461B820B8C72B4BCC9CF", 00:20:39.793 "uuid": "f1df1036-9b0e-461b-820b-8c72b4bcc9cf" 00:20:39.793 }, 00:20:39.793 { 00:20:39.793 "nsid": 2, 00:20:39.793 "bdev_name": "Malloc1", 00:20:39.793 "name": "Malloc1", 00:20:39.793 "nguid": "8448D9E325DE47B6BA8AFAC9D24E7AC9", 00:20:39.793 "uuid": "8448d9e3-25de-47b6-ba8a-fac9d24e7ac9" 00:20:39.793 } 00:20:39.793 ] 00:20:39.793 } 00:20:39.793 ] 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3640375 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.793 00:02:40 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.793 rmmod nvme_tcp 00:20:39.793 rmmod nvme_fabrics 00:20:39.793 rmmod nvme_keyring 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3640095 ']' 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3640095 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3640095 ']' 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3640095 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3640095 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3640095' 00:20:39.793 killing process with pid 3640095 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3640095 00:20:39.793 [2024-05-15 00:02:40.378754] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:39.793 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3640095 00:20:40.053 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:40.053 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:40.053 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:40.053 00:02:40 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.053 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.053 00:02:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.053 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.053 00:02:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.588 00:02:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.588 00:20:42.588 real 0m10.682s 00:20:42.588 user 0m7.669s 00:20:42.588 sys 0m5.711s 00:20:42.588 00:02:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:42.588 00:02:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:42.588 ************************************ 00:20:42.588 END TEST nvmf_aer 00:20:42.588 ************************************ 00:20:42.588 00:02:42 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:42.588 00:02:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:42.588 00:02:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:42.588 00:02:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:42.588 ************************************ 00:20:42.588 START TEST nvmf_async_init 00:20:42.588 ************************************ 00:20:42.588 00:02:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:42.588 * Looking for test storage... 00:20:42.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:42.588 00:02:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.588 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:42.588 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.588 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.588 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.588 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
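The nvmf_aer run that just finished follows a simple pattern: build a subsystem with one namespace, start the in-tree aer example against it, then hot-add a second namespace so the controller emits a namespace-attribute-changed event (the "aer_cb for log page 4 ... Changed Namespace" lines above). A rough outline of those RPCs follows, with names and sizes taken from the trace; rpc_cmd is the test framework's wrapper around scripts/rpc.py, and the assumption that the aer tool signals readiness by creating the touch file (which the script waits on before adding the namespace) is inferred from the trace rather than stated in it.

# Outline of host/aer.sh as traced above.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 --name Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start the aer example in the background; the script waits for the touch file
# before changing the subsystem, so the async event cannot be missed.
test/nvme/aer/aer \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &

# Adding a second namespace triggers the "Changed Namespace" notice seen above.
rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2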
00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.589 
00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=37a68cb70a014b7e9da45a116ab77604 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.589 00:02:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:49.155 00:02:49 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:49.155 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:49.155 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.155 00:02:49 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:49.155 Found net devices under 0000:af:00.0: cvl_0_0 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:49.155 Found net devices under 0000:af:00.1: cvl_0_1 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:49.155 00:02:49 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:49.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:20:49.155 00:20:49.155 --- 10.0.0.2 ping statistics --- 00:20:49.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.155 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:20:49.155 00:20:49.155 --- 10.0.0.1 ping statistics --- 00:20:49.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.155 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.155 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3644067 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3644067 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3644067 ']' 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:49.156 00:02:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:49.156 [2024-05-15 00:02:49.527606] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:20:49.156 [2024-05-15 00:02:49.527650] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.156 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.156 [2024-05-15 00:02:49.601421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.156 [2024-05-15 00:02:49.673556] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.156 [2024-05-15 00:02:49.673588] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:49.156 [2024-05-15 00:02:49.673598] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.156 [2024-05-15 00:02:49.673610] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.156 [2024-05-15 00:02:49.673617] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.156 [2024-05-15 00:02:49.673638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.770 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:49.770 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:20:49.770 00:02:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.770 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.770 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.039 [2024-05-15 00:02:50.380179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.039 null0 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 37a68cb70a014b7e9da45a116ab77604 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:50.039 
00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.039 [2024-05-15 00:02:50.420228] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:50.039 [2024-05-15 00:02:50.420433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.039 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.298 nvme0n1 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.298 [ 00:20:50.298 { 00:20:50.298 "name": "nvme0n1", 00:20:50.298 "aliases": [ 00:20:50.298 "37a68cb7-0a01-4b7e-9da4-5a116ab77604" 00:20:50.298 ], 00:20:50.298 "product_name": "NVMe disk", 00:20:50.298 "block_size": 512, 00:20:50.298 "num_blocks": 2097152, 00:20:50.298 "uuid": "37a68cb7-0a01-4b7e-9da4-5a116ab77604", 00:20:50.298 "assigned_rate_limits": { 00:20:50.298 "rw_ios_per_sec": 0, 00:20:50.298 "rw_mbytes_per_sec": 0, 00:20:50.298 "r_mbytes_per_sec": 0, 00:20:50.298 "w_mbytes_per_sec": 0 00:20:50.298 }, 00:20:50.298 "claimed": false, 00:20:50.298 "zoned": false, 00:20:50.298 "supported_io_types": { 00:20:50.298 "read": true, 00:20:50.298 "write": true, 00:20:50.298 "unmap": false, 00:20:50.298 "write_zeroes": true, 00:20:50.298 "flush": true, 00:20:50.298 "reset": true, 00:20:50.298 "compare": true, 00:20:50.298 "compare_and_write": true, 00:20:50.298 "abort": true, 00:20:50.298 "nvme_admin": true, 00:20:50.298 "nvme_io": true 00:20:50.298 }, 00:20:50.298 "memory_domains": [ 00:20:50.298 { 00:20:50.298 "dma_device_id": "system", 00:20:50.298 "dma_device_type": 1 00:20:50.298 } 00:20:50.298 ], 00:20:50.298 "driver_specific": { 00:20:50.298 "nvme": [ 00:20:50.298 { 00:20:50.298 "trid": { 00:20:50.298 "trtype": "TCP", 00:20:50.298 "adrfam": "IPv4", 00:20:50.298 "traddr": "10.0.0.2", 00:20:50.298 "trsvcid": "4420", 00:20:50.298 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:50.298 }, 00:20:50.298 "ctrlr_data": { 00:20:50.298 "cntlid": 1, 00:20:50.298 "vendor_id": "0x8086", 00:20:50.298 "model_number": "SPDK bdev Controller", 00:20:50.298 "serial_number": "00000000000000000000", 00:20:50.298 "firmware_revision": "24.05", 00:20:50.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:50.298 "oacs": { 00:20:50.298 "security": 0, 00:20:50.298 "format": 0, 00:20:50.298 "firmware": 0, 00:20:50.298 "ns_manage": 0 00:20:50.298 }, 00:20:50.298 "multi_ctrlr": true, 00:20:50.298 "ana_reporting": false 00:20:50.298 }, 00:20:50.298 "vs": { 00:20:50.298 "nvme_version": "1.3" 00:20:50.298 }, 00:20:50.298 "ns_data": { 00:20:50.298 "id": 1, 00:20:50.298 "can_share": true 00:20:50.298 } 
00:20:50.298 } 00:20:50.298 ], 00:20:50.298 "mp_policy": "active_passive" 00:20:50.298 } 00:20:50.298 } 00:20:50.298 ] 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.298 [2024-05-15 00:02:50.680940] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:50.298 [2024-05-15 00:02:50.681009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1972e70 (9): Bad file descriptor 00:20:50.298 [2024-05-15 00:02:50.813277] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.298 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.298 [ 00:20:50.298 { 00:20:50.298 "name": "nvme0n1", 00:20:50.298 "aliases": [ 00:20:50.298 "37a68cb7-0a01-4b7e-9da4-5a116ab77604" 00:20:50.298 ], 00:20:50.298 "product_name": "NVMe disk", 00:20:50.298 "block_size": 512, 00:20:50.298 "num_blocks": 2097152, 00:20:50.298 "uuid": "37a68cb7-0a01-4b7e-9da4-5a116ab77604", 00:20:50.298 "assigned_rate_limits": { 00:20:50.298 "rw_ios_per_sec": 0, 00:20:50.298 "rw_mbytes_per_sec": 0, 00:20:50.298 "r_mbytes_per_sec": 0, 00:20:50.298 "w_mbytes_per_sec": 0 00:20:50.298 }, 00:20:50.298 "claimed": false, 00:20:50.298 "zoned": false, 00:20:50.298 "supported_io_types": { 00:20:50.298 "read": true, 00:20:50.298 "write": true, 00:20:50.298 "unmap": false, 00:20:50.298 "write_zeroes": true, 00:20:50.298 "flush": true, 00:20:50.298 "reset": true, 00:20:50.298 "compare": true, 00:20:50.298 "compare_and_write": true, 00:20:50.298 "abort": true, 00:20:50.298 "nvme_admin": true, 00:20:50.298 "nvme_io": true 00:20:50.298 }, 00:20:50.298 "memory_domains": [ 00:20:50.298 { 00:20:50.298 "dma_device_id": "system", 00:20:50.298 "dma_device_type": 1 00:20:50.298 } 00:20:50.298 ], 00:20:50.298 "driver_specific": { 00:20:50.298 "nvme": [ 00:20:50.298 { 00:20:50.298 "trid": { 00:20:50.298 "trtype": "TCP", 00:20:50.298 "adrfam": "IPv4", 00:20:50.298 "traddr": "10.0.0.2", 00:20:50.298 "trsvcid": "4420", 00:20:50.298 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:50.298 }, 00:20:50.298 "ctrlr_data": { 00:20:50.298 "cntlid": 2, 00:20:50.298 "vendor_id": "0x8086", 00:20:50.298 "model_number": "SPDK bdev Controller", 00:20:50.298 "serial_number": "00000000000000000000", 00:20:50.298 "firmware_revision": "24.05", 00:20:50.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:50.298 "oacs": { 00:20:50.298 "security": 0, 00:20:50.298 "format": 0, 00:20:50.298 "firmware": 0, 00:20:50.298 "ns_manage": 0 00:20:50.298 }, 00:20:50.298 "multi_ctrlr": true, 00:20:50.299 "ana_reporting": false 00:20:50.299 }, 00:20:50.299 "vs": { 00:20:50.299 "nvme_version": "1.3" 00:20:50.299 }, 00:20:50.299 "ns_data": { 00:20:50.299 "id": 1, 00:20:50.299 "can_share": true 00:20:50.299 } 00:20:50.299 } 00:20:50.299 ], 00:20:50.299 "mp_policy": "active_passive" 
00:20:50.299 } 00:20:50.299 } 00:20:50.299 ] 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.LAtdqDtfZV 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.LAtdqDtfZV 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.299 [2024-05-15 00:02:50.877542] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.299 [2024-05-15 00:02:50.877662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LAtdqDtfZV 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.299 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.299 [2024-05-15 00:02:50.885562] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LAtdqDtfZV 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.558 [2024-05-15 00:02:50.893581] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.558 [2024-05-15 00:02:50.893619] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:20:50.558 nvme0n1 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.558 [ 00:20:50.558 { 00:20:50.558 "name": "nvme0n1", 00:20:50.558 "aliases": [ 00:20:50.558 "37a68cb7-0a01-4b7e-9da4-5a116ab77604" 00:20:50.558 ], 00:20:50.558 "product_name": "NVMe disk", 00:20:50.558 "block_size": 512, 00:20:50.558 "num_blocks": 2097152, 00:20:50.558 "uuid": "37a68cb7-0a01-4b7e-9da4-5a116ab77604", 00:20:50.558 "assigned_rate_limits": { 00:20:50.558 "rw_ios_per_sec": 0, 00:20:50.558 "rw_mbytes_per_sec": 0, 00:20:50.558 "r_mbytes_per_sec": 0, 00:20:50.558 "w_mbytes_per_sec": 0 00:20:50.558 }, 00:20:50.558 "claimed": false, 00:20:50.558 "zoned": false, 00:20:50.558 "supported_io_types": { 00:20:50.558 "read": true, 00:20:50.558 "write": true, 00:20:50.558 "unmap": false, 00:20:50.558 "write_zeroes": true, 00:20:50.558 "flush": true, 00:20:50.558 "reset": true, 00:20:50.558 "compare": true, 00:20:50.558 "compare_and_write": true, 00:20:50.558 "abort": true, 00:20:50.558 "nvme_admin": true, 00:20:50.558 "nvme_io": true 00:20:50.558 }, 00:20:50.558 "memory_domains": [ 00:20:50.558 { 00:20:50.558 "dma_device_id": "system", 00:20:50.558 "dma_device_type": 1 00:20:50.558 } 00:20:50.558 ], 00:20:50.558 "driver_specific": { 00:20:50.558 "nvme": [ 00:20:50.558 { 00:20:50.558 "trid": { 00:20:50.558 "trtype": "TCP", 00:20:50.558 "adrfam": "IPv4", 00:20:50.558 "traddr": "10.0.0.2", 00:20:50.558 "trsvcid": "4421", 00:20:50.558 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:50.558 }, 00:20:50.558 "ctrlr_data": { 00:20:50.558 "cntlid": 3, 00:20:50.558 "vendor_id": "0x8086", 00:20:50.558 "model_number": "SPDK bdev Controller", 00:20:50.558 "serial_number": "00000000000000000000", 00:20:50.558 "firmware_revision": "24.05", 00:20:50.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:50.558 "oacs": { 00:20:50.558 "security": 0, 00:20:50.558 "format": 0, 00:20:50.558 "firmware": 0, 00:20:50.558 "ns_manage": 0 00:20:50.558 }, 00:20:50.558 "multi_ctrlr": true, 00:20:50.558 "ana_reporting": false 00:20:50.558 }, 00:20:50.558 "vs": { 00:20:50.558 "nvme_version": "1.3" 00:20:50.558 }, 00:20:50.558 "ns_data": { 00:20:50.558 "id": 1, 00:20:50.558 "can_share": true 00:20:50.558 } 00:20:50.558 } 00:20:50.558 ], 00:20:50.558 "mp_policy": "active_passive" 00:20:50.558 } 00:20:50.558 } 00:20:50.558 ] 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.LAtdqDtfZV 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:50.558 00:02:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.559 rmmod nvme_tcp 00:20:50.559 rmmod nvme_fabrics 00:20:50.559 rmmod nvme_keyring 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3644067 ']' 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3644067 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3644067 ']' 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3644067 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3644067 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3644067' 00:20:50.559 killing process with pid 3644067 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3644067 00:20:50.559 [2024-05-15 00:02:51.110826] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:50.559 [2024-05-15 00:02:51.110850] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:50.559 [2024-05-15 00:02:51.110860] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:50.559 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3644067 00:20:50.817 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:50.817 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:50.817 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:50.817 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.817 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:50.817 00:02:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.817 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.817 00:02:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.347 00:02:53 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:53.347 00:20:53.347 real 0m10.606s 00:20:53.347 user 0m3.794s 00:20:53.347 sys 0m5.380s 00:20:53.347 00:02:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:53.347 00:02:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:53.347 ************************************ 00:20:53.347 END TEST nvmf_async_init 00:20:53.347 ************************************ 00:20:53.347 00:02:53 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:53.347 00:02:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:53.347 00:02:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:53.347 00:02:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:53.347 ************************************ 00:20:53.347 START TEST dma 00:20:53.347 ************************************ 00:20:53.347 00:02:53 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:53.347 * Looking for test storage... 00:20:53.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:53.347 00:02:53 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.347 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.348 00:02:53 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.348 00:02:53 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.348 00:02:53 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.348 00:02:53 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.348 00:02:53 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.348 00:02:53 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.348 00:02:53 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:53.348 00:02:53 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.348 00:02:53 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.348 00:02:53 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:53.348 00:02:53 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:53.348 00:20:53.348 real 0m0.140s 00:20:53.348 user 0m0.052s 00:20:53.348 sys 0m0.099s 00:20:53.348 00:02:53 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:53.348 00:02:53 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:53.348 ************************************ 
00:20:53.348 END TEST dma 00:20:53.348 ************************************ 00:20:53.348 00:02:53 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:53.348 00:02:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:53.348 00:02:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:53.348 00:02:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:53.348 ************************************ 00:20:53.348 START TEST nvmf_identify 00:20:53.348 ************************************ 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:53.348 * Looking for test storage... 00:20:53.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:53.348 00:02:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:59.912 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:59.912 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:59.912 Found net devices under 0000:af:00.0: cvl_0_0 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
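gather_supported_nvmf_pci_devs collects the PCI addresses of the supported NICs and then maps each one to its kernel net device through /sys/bus/pci/devices/<addr>/net/, which is where the "Found net devices under ..." lines come from. A minimal stand-alone sketch of that lookup, assuming lspci is available (the script itself walks a prebuilt pci_bus_cache rather than calling lspci):

  for pci in $(lspci -Dd 8086:159b | awk '{print $1}'); do      # E810 device ID matched above
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] && echo "Found net devices under $pci: $(basename "$netdir")"
      done
  done
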
00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:59.912 Found net devices under 0000:af:00.1: cvl_0_1 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:59.912 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:00.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:00.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:21:00.171 00:21:00.171 --- 10.0.0.2 ping statistics --- 00:21:00.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.171 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:00.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:21:00.171 00:21:00.171 --- 10.0.0.1 ping statistics --- 00:21:00.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.171 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3648143 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3648143 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3648143 ']' 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:00.171 00:03:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:00.171 [2024-05-15 00:03:00.743956] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
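The nvmf_tcp_init trace above builds the test topology on a single dual-port NIC: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, TCP port 4420 is opened, and connectivity is verified with a ping in each direction before nvmf_tgt is launched inside the namespace. A sketch that reproduces the same plumbing with the exact commands shown in the log (interface, namespace, and address values are the ones from this run):

  #!/usr/bin/env bash
  # Sketch of the netns plumbing done by nvmf_tcp_init in this log.
  set -e
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic to the target
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Splitting the two ports across a namespace boundary lets the test exercise a real TCP path between target and initiator on one host, which is why the nvmf_tgt process in the next lines is started with ip netns exec cvl_0_0_ns_spdk.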
00:21:00.171 [2024-05-15 00:03:00.744007] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.429 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.429 [2024-05-15 00:03:00.819196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.429 [2024-05-15 00:03:00.895284] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.429 [2024-05-15 00:03:00.895324] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.429 [2024-05-15 00:03:00.895333] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.429 [2024-05-15 00:03:00.895342] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.429 [2024-05-15 00:03:00.895365] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.429 [2024-05-15 00:03:00.895409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.429 [2024-05-15 00:03:00.895501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.429 [2024-05-15 00:03:00.895585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.429 [2024-05-15 00:03:00.895587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.997 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:00.997 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:21:00.997 00:03:01 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.997 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.997 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:00.997 [2024-05-15 00:03:01.563904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.997 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.997 00:03:01 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:00.997 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.997 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:01.257 Malloc0 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:01.257 [2024-05-15 00:03:01.662349] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:01.257 [2024-05-15 00:03:01.662611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.257 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:01.257 [ 00:21:01.257 { 00:21:01.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:01.257 "subtype": "Discovery", 00:21:01.257 "listen_addresses": [ 00:21:01.257 { 00:21:01.257 "trtype": "TCP", 00:21:01.257 "adrfam": "IPv4", 00:21:01.257 "traddr": "10.0.0.2", 00:21:01.257 "trsvcid": "4420" 00:21:01.257 } 00:21:01.257 ], 00:21:01.257 "allow_any_host": true, 00:21:01.257 "hosts": [] 00:21:01.257 }, 00:21:01.258 { 00:21:01.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.258 "subtype": "NVMe", 00:21:01.258 "listen_addresses": [ 00:21:01.258 { 00:21:01.258 "trtype": "TCP", 00:21:01.258 "adrfam": "IPv4", 00:21:01.258 "traddr": "10.0.0.2", 00:21:01.258 "trsvcid": "4420" 00:21:01.258 } 00:21:01.258 ], 00:21:01.258 "allow_any_host": true, 00:21:01.258 "hosts": [], 00:21:01.258 "serial_number": "SPDK00000000000001", 00:21:01.258 "model_number": "SPDK bdev Controller", 00:21:01.258 "max_namespaces": 32, 00:21:01.258 "min_cntlid": 1, 00:21:01.258 "max_cntlid": 65519, 00:21:01.258 "namespaces": [ 00:21:01.258 { 00:21:01.258 "nsid": 1, 00:21:01.258 "bdev_name": "Malloc0", 00:21:01.258 "name": "Malloc0", 00:21:01.258 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:01.258 "eui64": "ABCDEF0123456789", 00:21:01.258 "uuid": "af942408-a64e-4d7a-9143-31c55f421592" 00:21:01.258 } 00:21:01.258 ] 00:21:01.258 } 00:21:01.258 ] 00:21:01.258 00:03:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.258 00:03:01 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:01.258 [2024-05-15 
00:03:01.722005] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:21:01.258 [2024-05-15 00:03:01.722047] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648395 ] 00:21:01.258 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.258 [2024-05-15 00:03:01.751966] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:01.258 [2024-05-15 00:03:01.752012] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:01.258 [2024-05-15 00:03:01.752018] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:01.258 [2024-05-15 00:03:01.752030] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:01.258 [2024-05-15 00:03:01.752039] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:01.258 [2024-05-15 00:03:01.752549] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:01.258 [2024-05-15 00:03:01.752577] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1af2ca0 0 00:21:01.258 [2024-05-15 00:03:01.770202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:01.258 [2024-05-15 00:03:01.770220] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:01.258 [2024-05-15 00:03:01.770230] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:01.258 [2024-05-15 00:03:01.770235] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:01.258 [2024-05-15 00:03:01.770276] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.770285] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.770290] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af2ca0) 00:21:01.258 [2024-05-15 00:03:01.770304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:01.258 [2024-05-15 00:03:01.770323] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5c980, cid 0, qid 0 00:21:01.258 [2024-05-15 00:03:01.778203] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.258 [2024-05-15 00:03:01.778214] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.258 [2024-05-15 00:03:01.778218] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778223] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5c980) on tqpair=0x1af2ca0 00:21:01.258 [2024-05-15 00:03:01.778236] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:01.258 [2024-05-15 00:03:01.778243] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:01.258 [2024-05-15 00:03:01.778249] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:01.258 [2024-05-15 00:03:01.778262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:01.258 [2024-05-15 00:03:01.778267] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af2ca0) 00:21:01.258 [2024-05-15 00:03:01.778281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.258 [2024-05-15 00:03:01.778295] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5c980, cid 0, qid 0 00:21:01.258 [2024-05-15 00:03:01.778457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.258 [2024-05-15 00:03:01.778464] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.258 [2024-05-15 00:03:01.778469] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778474] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5c980) on tqpair=0x1af2ca0 00:21:01.258 [2024-05-15 00:03:01.778481] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:01.258 [2024-05-15 00:03:01.778490] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:01.258 [2024-05-15 00:03:01.778498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778503] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778508] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af2ca0) 00:21:01.258 [2024-05-15 00:03:01.778516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.258 [2024-05-15 00:03:01.778529] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5c980, cid 0, qid 0 00:21:01.258 [2024-05-15 00:03:01.778649] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.258 [2024-05-15 00:03:01.778656] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.258 [2024-05-15 00:03:01.778661] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778666] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5c980) on tqpair=0x1af2ca0 00:21:01.258 [2024-05-15 00:03:01.778674] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:01.258 [2024-05-15 00:03:01.778683] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:01.258 [2024-05-15 00:03:01.778691] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778698] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778703] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af2ca0) 00:21:01.258 [2024-05-15 00:03:01.778711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.258 [2024-05-15 00:03:01.778723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5c980, cid 0, qid 0 00:21:01.258 [2024-05-15 00:03:01.778840] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.258 [2024-05-15 00:03:01.778848] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.258 [2024-05-15 00:03:01.778853] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778857] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5c980) on tqpair=0x1af2ca0 00:21:01.258 [2024-05-15 00:03:01.778864] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:01.258 [2024-05-15 00:03:01.778875] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778880] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.778885] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af2ca0) 00:21:01.258 [2024-05-15 00:03:01.778892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.258 [2024-05-15 00:03:01.778904] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5c980, cid 0, qid 0 00:21:01.258 [2024-05-15 00:03:01.779021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.258 [2024-05-15 00:03:01.779028] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.258 [2024-05-15 00:03:01.779032] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.779037] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5c980) on tqpair=0x1af2ca0 00:21:01.258 [2024-05-15 00:03:01.779043] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:01.258 [2024-05-15 00:03:01.779050] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:01.258 [2024-05-15 00:03:01.779059] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:01.258 [2024-05-15 00:03:01.779165] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:01.258 [2024-05-15 00:03:01.779171] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:01.258 [2024-05-15 00:03:01.779181] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.779186] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.779198] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af2ca0) 00:21:01.258 [2024-05-15 00:03:01.779206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.258 [2024-05-15 00:03:01.779219] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5c980, cid 0, qid 0 00:21:01.258 [2024-05-15 00:03:01.779348] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.258 [2024-05-15 00:03:01.779355] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:01.258 [2024-05-15 00:03:01.779360] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.779365] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5c980) on tqpair=0x1af2ca0 00:21:01.258 [2024-05-15 00:03:01.779372] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:01.258 [2024-05-15 00:03:01.779385] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.779390] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.258 [2024-05-15 00:03:01.779395] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af2ca0) 00:21:01.258 [2024-05-15 00:03:01.779402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.258 [2024-05-15 00:03:01.779414] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5c980, cid 0, qid 0 00:21:01.258 [2024-05-15 00:03:01.779532] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.259 [2024-05-15 00:03:01.779539] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.259 [2024-05-15 00:03:01.779543] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.779548] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5c980) on tqpair=0x1af2ca0 00:21:01.259 [2024-05-15 00:03:01.779555] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:01.259 [2024-05-15 00:03:01.779561] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:01.259 [2024-05-15 00:03:01.779570] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:01.259 [2024-05-15 00:03:01.779580] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:01.259 [2024-05-15 00:03:01.779591] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.779595] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af2ca0) 00:21:01.259 [2024-05-15 00:03:01.779603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.259 [2024-05-15 00:03:01.779615] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5c980, cid 0, qid 0 00:21:01.259 [2024-05-15 00:03:01.779762] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.259 [2024-05-15 00:03:01.779770] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.259 [2024-05-15 00:03:01.779774] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.779779] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1af2ca0): datao=0, datal=4096, cccid=0 00:21:01.259 [2024-05-15 00:03:01.779785] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5c980) on tqpair(0x1af2ca0): expected_datao=0, 
payload_size=4096 00:21:01.259 [2024-05-15 00:03:01.779791] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.779978] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.779983] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.259 [2024-05-15 00:03:01.820379] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.259 [2024-05-15 00:03:01.820384] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820389] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5c980) on tqpair=0x1af2ca0 00:21:01.259 [2024-05-15 00:03:01.820401] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:01.259 [2024-05-15 00:03:01.820408] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:01.259 [2024-05-15 00:03:01.820414] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:01.259 [2024-05-15 00:03:01.820420] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:01.259 [2024-05-15 00:03:01.820429] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:01.259 [2024-05-15 00:03:01.820435] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:01.259 [2024-05-15 00:03:01.820449] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:01.259 [2024-05-15 00:03:01.820460] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820465] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820470] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af2ca0) 00:21:01.259 [2024-05-15 00:03:01.820478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:01.259 [2024-05-15 00:03:01.820493] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5c980, cid 0, qid 0 00:21:01.259 [2024-05-15 00:03:01.820612] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.259 [2024-05-15 00:03:01.820619] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.259 [2024-05-15 00:03:01.820624] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820629] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5c980) on tqpair=0x1af2ca0 00:21:01.259 [2024-05-15 00:03:01.820637] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820642] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820647] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1af2ca0) 00:21:01.259 [2024-05-15 00:03:01.820653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.259 [2024-05-15 00:03:01.820661] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820665] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820670] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1af2ca0) 00:21:01.259 [2024-05-15 00:03:01.820676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.259 [2024-05-15 00:03:01.820683] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820687] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820692] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1af2ca0) 00:21:01.259 [2024-05-15 00:03:01.820698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.259 [2024-05-15 00:03:01.820705] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820709] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820714] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.259 [2024-05-15 00:03:01.820720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.259 [2024-05-15 00:03:01.820726] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:01.259 [2024-05-15 00:03:01.820739] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:01.259 [2024-05-15 00:03:01.820747] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820751] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1af2ca0) 00:21:01.259 [2024-05-15 00:03:01.820758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.259 [2024-05-15 00:03:01.820774] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5c980, cid 0, qid 0 00:21:01.259 [2024-05-15 00:03:01.820780] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cae0, cid 1, qid 0 00:21:01.259 [2024-05-15 00:03:01.820786] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cc40, cid 2, qid 0 00:21:01.259 [2024-05-15 00:03:01.820791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.259 [2024-05-15 00:03:01.820796] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cf00, cid 4, qid 0 00:21:01.259 [2024-05-15 00:03:01.820946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.259 [2024-05-15 00:03:01.820953] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.259 [2024-05-15 00:03:01.820958] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820962] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1b5cf00) on tqpair=0x1af2ca0 00:21:01.259 [2024-05-15 00:03:01.820970] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:01.259 [2024-05-15 00:03:01.820976] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:01.259 [2024-05-15 00:03:01.820988] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.820993] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1af2ca0) 00:21:01.259 [2024-05-15 00:03:01.821000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.259 [2024-05-15 00:03:01.821012] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cf00, cid 4, qid 0 00:21:01.259 [2024-05-15 00:03:01.821143] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.259 [2024-05-15 00:03:01.821150] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.259 [2024-05-15 00:03:01.821155] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.821159] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1af2ca0): datao=0, datal=4096, cccid=4 00:21:01.259 [2024-05-15 00:03:01.821165] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5cf00) on tqpair(0x1af2ca0): expected_datao=0, payload_size=4096 00:21:01.259 [2024-05-15 00:03:01.821171] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.821178] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.821183] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.259 [2024-05-15 00:03:01.821398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.260 [2024-05-15 00:03:01.821405] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.260 [2024-05-15 00:03:01.821409] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.260 [2024-05-15 00:03:01.821414] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cf00) on tqpair=0x1af2ca0 00:21:01.260 [2024-05-15 00:03:01.821430] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:01.260 [2024-05-15 00:03:01.821456] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.260 [2024-05-15 00:03:01.821461] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1af2ca0) 00:21:01.260 [2024-05-15 00:03:01.821469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.260 [2024-05-15 00:03:01.821476] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.260 [2024-05-15 00:03:01.821481] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.260 [2024-05-15 00:03:01.821486] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1af2ca0) 00:21:01.260 [2024-05-15 00:03:01.821495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:01.260 [2024-05-15 00:03:01.821512] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cf00, cid 4, qid 0 00:21:01.260 [2024-05-15 00:03:01.821517] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d060, cid 5, qid 0 00:21:01.260 [2024-05-15 00:03:01.821668] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.260 [2024-05-15 00:03:01.821676] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.260 [2024-05-15 00:03:01.821680] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.260 [2024-05-15 00:03:01.821685] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1af2ca0): datao=0, datal=1024, cccid=4 00:21:01.260 [2024-05-15 00:03:01.821691] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5cf00) on tqpair(0x1af2ca0): expected_datao=0, payload_size=1024 00:21:01.260 [2024-05-15 00:03:01.821696] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.260 [2024-05-15 00:03:01.821703] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.260 [2024-05-15 00:03:01.821708] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.260 [2024-05-15 00:03:01.821714] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.260 [2024-05-15 00:03:01.821720] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.260 [2024-05-15 00:03:01.821725] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.260 [2024-05-15 00:03:01.821729] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d060) on tqpair=0x1af2ca0 00:21:01.521 [2024-05-15 00:03:01.865198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.521 [2024-05-15 00:03:01.865209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.521 [2024-05-15 00:03:01.865214] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865219] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cf00) on tqpair=0x1af2ca0 00:21:01.521 [2024-05-15 00:03:01.865233] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865238] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1af2ca0) 00:21:01.521 [2024-05-15 00:03:01.865245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.521 [2024-05-15 00:03:01.865266] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cf00, cid 4, qid 0 00:21:01.521 [2024-05-15 00:03:01.865415] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.521 [2024-05-15 00:03:01.865423] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.521 [2024-05-15 00:03:01.865428] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865433] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1af2ca0): datao=0, datal=3072, cccid=4 00:21:01.521 [2024-05-15 00:03:01.865438] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5cf00) on tqpair(0x1af2ca0): expected_datao=0, payload_size=3072 00:21:01.521 [2024-05-15 00:03:01.865444] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865451] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865456] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865682] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.521 [2024-05-15 00:03:01.865689] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.521 [2024-05-15 00:03:01.865693] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865698] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cf00) on tqpair=0x1af2ca0 00:21:01.521 [2024-05-15 00:03:01.865709] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865714] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1af2ca0) 00:21:01.521 [2024-05-15 00:03:01.865726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.521 [2024-05-15 00:03:01.865744] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cf00, cid 4, qid 0 00:21:01.521 [2024-05-15 00:03:01.865873] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.521 [2024-05-15 00:03:01.865880] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.521 [2024-05-15 00:03:01.865884] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865889] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1af2ca0): datao=0, datal=8, cccid=4 00:21:01.521 [2024-05-15 00:03:01.865895] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5cf00) on tqpair(0x1af2ca0): expected_datao=0, payload_size=8 00:21:01.521 [2024-05-15 00:03:01.865901] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865907] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.865912] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.906583] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.521 [2024-05-15 00:03:01.906596] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.521 [2024-05-15 00:03:01.906601] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.521 [2024-05-15 00:03:01.906606] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cf00) on tqpair=0x1af2ca0 00:21:01.521 ===================================================== 00:21:01.521 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:01.521 ===================================================== 00:21:01.521 Controller Capabilities/Features 00:21:01.521 ================================ 00:21:01.521 Vendor ID: 0000 00:21:01.521 Subsystem Vendor ID: 0000 00:21:01.521 Serial Number: .................... 00:21:01.521 Model Number: ........................................ 
00:21:01.521 Firmware Version: 24.05 00:21:01.521 Recommended Arb Burst: 0 00:21:01.521 IEEE OUI Identifier: 00 00 00 00:21:01.521 Multi-path I/O 00:21:01.521 May have multiple subsystem ports: No 00:21:01.521 May have multiple controllers: No 00:21:01.521 Associated with SR-IOV VF: No 00:21:01.521 Max Data Transfer Size: 131072 00:21:01.521 Max Number of Namespaces: 0 00:21:01.521 Max Number of I/O Queues: 1024 00:21:01.521 NVMe Specification Version (VS): 1.3 00:21:01.521 NVMe Specification Version (Identify): 1.3 00:21:01.521 Maximum Queue Entries: 128 00:21:01.521 Contiguous Queues Required: Yes 00:21:01.521 Arbitration Mechanisms Supported 00:21:01.521 Weighted Round Robin: Not Supported 00:21:01.521 Vendor Specific: Not Supported 00:21:01.521 Reset Timeout: 15000 ms 00:21:01.521 Doorbell Stride: 4 bytes 00:21:01.521 NVM Subsystem Reset: Not Supported 00:21:01.521 Command Sets Supported 00:21:01.521 NVM Command Set: Supported 00:21:01.521 Boot Partition: Not Supported 00:21:01.521 Memory Page Size Minimum: 4096 bytes 00:21:01.521 Memory Page Size Maximum: 4096 bytes 00:21:01.521 Persistent Memory Region: Not Supported 00:21:01.521 Optional Asynchronous Events Supported 00:21:01.521 Namespace Attribute Notices: Not Supported 00:21:01.521 Firmware Activation Notices: Not Supported 00:21:01.521 ANA Change Notices: Not Supported 00:21:01.521 PLE Aggregate Log Change Notices: Not Supported 00:21:01.521 LBA Status Info Alert Notices: Not Supported 00:21:01.521 EGE Aggregate Log Change Notices: Not Supported 00:21:01.521 Normal NVM Subsystem Shutdown event: Not Supported 00:21:01.521 Zone Descriptor Change Notices: Not Supported 00:21:01.521 Discovery Log Change Notices: Supported 00:21:01.521 Controller Attributes 00:21:01.521 128-bit Host Identifier: Not Supported 00:21:01.521 Non-Operational Permissive Mode: Not Supported 00:21:01.521 NVM Sets: Not Supported 00:21:01.521 Read Recovery Levels: Not Supported 00:21:01.521 Endurance Groups: Not Supported 00:21:01.521 Predictable Latency Mode: Not Supported 00:21:01.521 Traffic Based Keep ALive: Not Supported 00:21:01.521 Namespace Granularity: Not Supported 00:21:01.521 SQ Associations: Not Supported 00:21:01.521 UUID List: Not Supported 00:21:01.521 Multi-Domain Subsystem: Not Supported 00:21:01.521 Fixed Capacity Management: Not Supported 00:21:01.521 Variable Capacity Management: Not Supported 00:21:01.521 Delete Endurance Group: Not Supported 00:21:01.521 Delete NVM Set: Not Supported 00:21:01.521 Extended LBA Formats Supported: Not Supported 00:21:01.521 Flexible Data Placement Supported: Not Supported 00:21:01.521 00:21:01.521 Controller Memory Buffer Support 00:21:01.521 ================================ 00:21:01.521 Supported: No 00:21:01.521 00:21:01.521 Persistent Memory Region Support 00:21:01.521 ================================ 00:21:01.521 Supported: No 00:21:01.521 00:21:01.521 Admin Command Set Attributes 00:21:01.521 ============================ 00:21:01.521 Security Send/Receive: Not Supported 00:21:01.521 Format NVM: Not Supported 00:21:01.521 Firmware Activate/Download: Not Supported 00:21:01.521 Namespace Management: Not Supported 00:21:01.521 Device Self-Test: Not Supported 00:21:01.521 Directives: Not Supported 00:21:01.521 NVMe-MI: Not Supported 00:21:01.521 Virtualization Management: Not Supported 00:21:01.521 Doorbell Buffer Config: Not Supported 00:21:01.521 Get LBA Status Capability: Not Supported 00:21:01.521 Command & Feature Lockdown Capability: Not Supported 00:21:01.521 Abort Command Limit: 1 00:21:01.521 Async 
Event Request Limit: 4 00:21:01.521 Number of Firmware Slots: N/A 00:21:01.521 Firmware Slot 1 Read-Only: N/A 00:21:01.521 Firmware Activation Without Reset: N/A 00:21:01.521 Multiple Update Detection Support: N/A 00:21:01.521 Firmware Update Granularity: No Information Provided 00:21:01.521 Per-Namespace SMART Log: No 00:21:01.521 Asymmetric Namespace Access Log Page: Not Supported 00:21:01.521 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:01.521 Command Effects Log Page: Not Supported 00:21:01.521 Get Log Page Extended Data: Supported 00:21:01.521 Telemetry Log Pages: Not Supported 00:21:01.521 Persistent Event Log Pages: Not Supported 00:21:01.521 Supported Log Pages Log Page: May Support 00:21:01.521 Commands Supported & Effects Log Page: Not Supported 00:21:01.521 Feature Identifiers & Effects Log Page:May Support 00:21:01.521 NVMe-MI Commands & Effects Log Page: May Support 00:21:01.521 Data Area 4 for Telemetry Log: Not Supported 00:21:01.521 Error Log Page Entries Supported: 128 00:21:01.521 Keep Alive: Not Supported 00:21:01.521 00:21:01.521 NVM Command Set Attributes 00:21:01.521 ========================== 00:21:01.521 Submission Queue Entry Size 00:21:01.521 Max: 1 00:21:01.521 Min: 1 00:21:01.521 Completion Queue Entry Size 00:21:01.521 Max: 1 00:21:01.521 Min: 1 00:21:01.521 Number of Namespaces: 0 00:21:01.521 Compare Command: Not Supported 00:21:01.521 Write Uncorrectable Command: Not Supported 00:21:01.521 Dataset Management Command: Not Supported 00:21:01.522 Write Zeroes Command: Not Supported 00:21:01.522 Set Features Save Field: Not Supported 00:21:01.522 Reservations: Not Supported 00:21:01.522 Timestamp: Not Supported 00:21:01.522 Copy: Not Supported 00:21:01.522 Volatile Write Cache: Not Present 00:21:01.522 Atomic Write Unit (Normal): 1 00:21:01.522 Atomic Write Unit (PFail): 1 00:21:01.522 Atomic Compare & Write Unit: 1 00:21:01.522 Fused Compare & Write: Supported 00:21:01.522 Scatter-Gather List 00:21:01.522 SGL Command Set: Supported 00:21:01.522 SGL Keyed: Supported 00:21:01.522 SGL Bit Bucket Descriptor: Not Supported 00:21:01.522 SGL Metadata Pointer: Not Supported 00:21:01.522 Oversized SGL: Not Supported 00:21:01.522 SGL Metadata Address: Not Supported 00:21:01.522 SGL Offset: Supported 00:21:01.522 Transport SGL Data Block: Not Supported 00:21:01.522 Replay Protected Memory Block: Not Supported 00:21:01.522 00:21:01.522 Firmware Slot Information 00:21:01.522 ========================= 00:21:01.522 Active slot: 0 00:21:01.522 00:21:01.522 00:21:01.522 Error Log 00:21:01.522 ========= 00:21:01.522 00:21:01.522 Active Namespaces 00:21:01.522 ================= 00:21:01.522 Discovery Log Page 00:21:01.522 ================== 00:21:01.522 Generation Counter: 2 00:21:01.522 Number of Records: 2 00:21:01.522 Record Format: 0 00:21:01.522 00:21:01.522 Discovery Log Entry 0 00:21:01.522 ---------------------- 00:21:01.522 Transport Type: 3 (TCP) 00:21:01.522 Address Family: 1 (IPv4) 00:21:01.522 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:01.522 Entry Flags: 00:21:01.522 Duplicate Returned Information: 1 00:21:01.522 Explicit Persistent Connection Support for Discovery: 1 00:21:01.522 Transport Requirements: 00:21:01.522 Secure Channel: Not Required 00:21:01.522 Port ID: 0 (0x0000) 00:21:01.522 Controller ID: 65535 (0xffff) 00:21:01.522 Admin Max SQ Size: 128 00:21:01.522 Transport Service Identifier: 4420 00:21:01.522 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:01.522 Transport Address: 10.0.0.2 00:21:01.522 
Discovery Log Entry 1 00:21:01.522 ---------------------- 00:21:01.522 Transport Type: 3 (TCP) 00:21:01.522 Address Family: 1 (IPv4) 00:21:01.522 Subsystem Type: 2 (NVM Subsystem) 00:21:01.522 Entry Flags: 00:21:01.522 Duplicate Returned Information: 0 00:21:01.522 Explicit Persistent Connection Support for Discovery: 0 00:21:01.522 Transport Requirements: 00:21:01.522 Secure Channel: Not Required 00:21:01.522 Port ID: 0 (0x0000) 00:21:01.522 Controller ID: 65535 (0xffff) 00:21:01.522 Admin Max SQ Size: 128 00:21:01.522 Transport Service Identifier: 4420 00:21:01.522 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:01.522 Transport Address: 10.0.0.2 [2024-05-15 00:03:01.906691] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:01.522 [2024-05-15 00:03:01.906705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.522 [2024-05-15 00:03:01.906713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.522 [2024-05-15 00:03:01.906720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.522 [2024-05-15 00:03:01.906728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.522 [2024-05-15 00:03:01.906737] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.906742] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.906746] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.522 [2024-05-15 00:03:01.906754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.522 [2024-05-15 00:03:01.906770] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.522 [2024-05-15 00:03:01.906934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.522 [2024-05-15 00:03:01.906941] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.522 [2024-05-15 00:03:01.906946] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.906951] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.522 [2024-05-15 00:03:01.906960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.906965] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.906970] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.522 [2024-05-15 00:03:01.906977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.522 [2024-05-15 00:03:01.906994] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.522 [2024-05-15 00:03:01.907148] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.522 [2024-05-15 00:03:01.907158] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.522 [2024-05-15 00:03:01.907162] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907167] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.522 [2024-05-15 00:03:01.907174] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:01.522 [2024-05-15 00:03:01.907180] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:01.522 [2024-05-15 00:03:01.907196] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907202] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907206] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.522 [2024-05-15 00:03:01.907214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.522 [2024-05-15 00:03:01.907227] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.522 [2024-05-15 00:03:01.907344] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.522 [2024-05-15 00:03:01.907352] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.522 [2024-05-15 00:03:01.907356] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907361] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.522 [2024-05-15 00:03:01.907374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907379] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907383] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.522 [2024-05-15 00:03:01.907391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.522 [2024-05-15 00:03:01.907403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.522 [2024-05-15 00:03:01.907521] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.522 [2024-05-15 00:03:01.907528] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.522 [2024-05-15 00:03:01.907533] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907538] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.522 [2024-05-15 00:03:01.907549] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907554] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907559] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.522 [2024-05-15 00:03:01.907566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.522 [2024-05-15 00:03:01.907578] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.522 [2024-05-15 00:03:01.907696] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.522 [2024-05-15 
00:03:01.907703] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.522 [2024-05-15 00:03:01.907708] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907712] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.522 [2024-05-15 00:03:01.907724] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907729] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907734] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.522 [2024-05-15 00:03:01.907741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.522 [2024-05-15 00:03:01.907755] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.522 [2024-05-15 00:03:01.907871] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.522 [2024-05-15 00:03:01.907878] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.522 [2024-05-15 00:03:01.907883] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907888] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.522 [2024-05-15 00:03:01.907899] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907904] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.907909] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.522 [2024-05-15 00:03:01.907916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.522 [2024-05-15 00:03:01.907928] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.522 [2024-05-15 00:03:01.908049] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.522 [2024-05-15 00:03:01.908056] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.522 [2024-05-15 00:03:01.908061] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.908065] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.522 [2024-05-15 00:03:01.908076] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.908081] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.522 [2024-05-15 00:03:01.908086] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.522 [2024-05-15 00:03:01.908093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.522 [2024-05-15 00:03:01.908104] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.522 [2024-05-15 00:03:01.908230] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.522 [2024-05-15 00:03:01.908238] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.522 [2024-05-15 00:03:01.908243] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:01.522 [2024-05-15 00:03:01.908248] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.523 [2024-05-15 00:03:01.908259] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908264] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908269] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.523 [2024-05-15 00:03:01.908276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.523 [2024-05-15 00:03:01.908288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.523 [2024-05-15 00:03:01.908404] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.523 [2024-05-15 00:03:01.908411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.523 [2024-05-15 00:03:01.908415] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908420] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.523 [2024-05-15 00:03:01.908432] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908437] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908441] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.523 [2024-05-15 00:03:01.908448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.523 [2024-05-15 00:03:01.908460] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.523 [2024-05-15 00:03:01.908576] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.523 [2024-05-15 00:03:01.908583] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.523 [2024-05-15 00:03:01.908587] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908592] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.523 [2024-05-15 00:03:01.908604] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908609] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908614] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.523 [2024-05-15 00:03:01.908621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.523 [2024-05-15 00:03:01.908632] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.523 [2024-05-15 00:03:01.908748] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.523 [2024-05-15 00:03:01.908755] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.523 [2024-05-15 00:03:01.908759] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.523 [2024-05-15 00:03:01.908776] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908785] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.523 [2024-05-15 00:03:01.908792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.523 [2024-05-15 00:03:01.908804] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.523 [2024-05-15 00:03:01.908920] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.523 [2024-05-15 00:03:01.908927] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.523 [2024-05-15 00:03:01.908932] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908937] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.523 [2024-05-15 00:03:01.908948] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908953] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.908958] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.523 [2024-05-15 00:03:01.908965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.523 [2024-05-15 00:03:01.908977] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.523 [2024-05-15 00:03:01.909098] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.523 [2024-05-15 00:03:01.909105] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.523 [2024-05-15 00:03:01.909109] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.909114] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.523 [2024-05-15 00:03:01.909126] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.909131] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.909135] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.523 [2024-05-15 00:03:01.909142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.523 [2024-05-15 00:03:01.909154] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.523 [2024-05-15 00:03:01.913201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.523 [2024-05-15 00:03:01.913212] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.523 [2024-05-15 00:03:01.913217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.913222] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.523 [2024-05-15 00:03:01.913234] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.913239] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.523 [2024-05-15 
00:03:01.913244] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1af2ca0) 00:21:01.523 [2024-05-15 00:03:01.913251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.523 [2024-05-15 00:03:01.913264] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cda0, cid 3, qid 0 00:21:01.523 [2024-05-15 00:03:01.913610] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.523 [2024-05-15 00:03:01.913616] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.523 [2024-05-15 00:03:01.913621] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.913626] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cda0) on tqpair=0x1af2ca0 00:21:01.523 [2024-05-15 00:03:01.913636] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:01.523 00:21:01.523 00:03:01 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:01.523 [2024-05-15 00:03:01.952469] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:21:01.523 [2024-05-15 00:03:01.952522] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648503 ] 00:21:01.523 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.523 [2024-05-15 00:03:01.984269] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:01.523 [2024-05-15 00:03:01.984313] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:01.523 [2024-05-15 00:03:01.984319] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:01.523 [2024-05-15 00:03:01.984330] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:01.523 [2024-05-15 00:03:01.984338] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:01.523 [2024-05-15 00:03:01.984698] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:01.523 [2024-05-15 00:03:01.984723] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2477ca0 0 00:21:01.523 [2024-05-15 00:03:01.991201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:01.523 [2024-05-15 00:03:01.991217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:01.523 [2024-05-15 00:03:01.991226] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:01.523 [2024-05-15 00:03:01.991231] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:01.523 [2024-05-15 00:03:01.991265] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.991270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.991275] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ca0) 
00:21:01.523 [2024-05-15 00:03:01.991287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:01.523 [2024-05-15 00:03:01.991306] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1980, cid 0, qid 0 00:21:01.523 [2024-05-15 00:03:01.999201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.523 [2024-05-15 00:03:01.999210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.523 [2024-05-15 00:03:01.999215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.999220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1980) on tqpair=0x2477ca0 00:21:01.523 [2024-05-15 00:03:01.999233] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:01.523 [2024-05-15 00:03:01.999240] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:01.523 [2024-05-15 00:03:01.999246] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:01.523 [2024-05-15 00:03:01.999258] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.999262] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.999267] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ca0) 00:21:01.523 [2024-05-15 00:03:01.999275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.523 [2024-05-15 00:03:01.999289] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1980, cid 0, qid 0 00:21:01.523 [2024-05-15 00:03:01.999492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.523 [2024-05-15 00:03:01.999500] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.523 [2024-05-15 00:03:01.999504] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.999509] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1980) on tqpair=0x2477ca0 00:21:01.523 [2024-05-15 00:03:01.999516] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:01.523 [2024-05-15 00:03:01.999526] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:01.523 [2024-05-15 00:03:01.999533] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.999538] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.999543] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ca0) 00:21:01.523 [2024-05-15 00:03:01.999550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.523 [2024-05-15 00:03:01.999563] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1980, cid 0, qid 0 00:21:01.523 [2024-05-15 00:03:01.999682] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.523 [2024-05-15 00:03:01.999688] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.523 
[2024-05-15 00:03:01.999693] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.523 [2024-05-15 00:03:01.999697] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1980) on tqpair=0x2477ca0 00:21:01.523 [2024-05-15 00:03:01.999705] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:01.524 [2024-05-15 00:03:01.999714] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:01.524 [2024-05-15 00:03:01.999721] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:01.999726] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:01.999731] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:01.999738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.524 [2024-05-15 00:03:01.999753] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1980, cid 0, qid 0 00:21:01.524 [2024-05-15 00:03:01.999871] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.524 [2024-05-15 00:03:01.999878] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.524 [2024-05-15 00:03:01.999882] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:01.999887] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1980) on tqpair=0x2477ca0 00:21:01.524 [2024-05-15 00:03:01.999894] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:01.524 [2024-05-15 00:03:01.999904] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:01.999909] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:01.999914] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:01.999921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.524 [2024-05-15 00:03:01.999933] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1980, cid 0, qid 0 00:21:01.524 [2024-05-15 00:03:02.000051] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.524 [2024-05-15 00:03:02.000058] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.524 [2024-05-15 00:03:02.000062] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000067] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1980) on tqpair=0x2477ca0 00:21:01.524 [2024-05-15 00:03:02.000074] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:01.524 [2024-05-15 00:03:02.000080] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:01.524 [2024-05-15 00:03:02.000089] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 
00:21:01.524 [2024-05-15 00:03:02.000197] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:01.524 [2024-05-15 00:03:02.000202] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:01.524 [2024-05-15 00:03:02.000211] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000216] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000221] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:02.000228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.524 [2024-05-15 00:03:02.000240] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1980, cid 0, qid 0 00:21:01.524 [2024-05-15 00:03:02.000360] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.524 [2024-05-15 00:03:02.000367] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.524 [2024-05-15 00:03:02.000372] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000377] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1980) on tqpair=0x2477ca0 00:21:01.524 [2024-05-15 00:03:02.000383] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:01.524 [2024-05-15 00:03:02.000394] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000399] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000403] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:02.000410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.524 [2024-05-15 00:03:02.000425] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1980, cid 0, qid 0 00:21:01.524 [2024-05-15 00:03:02.000541] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.524 [2024-05-15 00:03:02.000548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.524 [2024-05-15 00:03:02.000552] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000557] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1980) on tqpair=0x2477ca0 00:21:01.524 [2024-05-15 00:03:02.000563] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:01.524 [2024-05-15 00:03:02.000569] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:01.524 [2024-05-15 00:03:02.000578] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:01.524 [2024-05-15 00:03:02.000588] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:01.524 [2024-05-15 00:03:02.000597] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000602] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:02.000609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.524 [2024-05-15 00:03:02.000622] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1980, cid 0, qid 0 00:21:01.524 [2024-05-15 00:03:02.000785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.524 [2024-05-15 00:03:02.000792] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.524 [2024-05-15 00:03:02.000796] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000801] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ca0): datao=0, datal=4096, cccid=0 00:21:01.524 [2024-05-15 00:03:02.000807] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e1980) on tqpair(0x2477ca0): expected_datao=0, payload_size=4096 00:21:01.524 [2024-05-15 00:03:02.000813] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000984] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.000989] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.524 [2024-05-15 00:03:02.044210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.524 [2024-05-15 00:03:02.044215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1980) on tqpair=0x2477ca0 00:21:01.524 [2024-05-15 00:03:02.044229] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:01.524 [2024-05-15 00:03:02.044235] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:01.524 [2024-05-15 00:03:02.044241] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:01.524 [2024-05-15 00:03:02.044245] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:01.524 [2024-05-15 00:03:02.044251] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:01.524 [2024-05-15 00:03:02.044257] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:01.524 [2024-05-15 00:03:02.044271] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:01.524 [2024-05-15 00:03:02.044283] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044288] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044292] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:02.044300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 
cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:01.524 [2024-05-15 00:03:02.044315] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1980, cid 0, qid 0 00:21:01.524 [2024-05-15 00:03:02.044520] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.524 [2024-05-15 00:03:02.044527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.524 [2024-05-15 00:03:02.044532] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044537] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1980) on tqpair=0x2477ca0 00:21:01.524 [2024-05-15 00:03:02.044545] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044550] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044554] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:02.044561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.524 [2024-05-15 00:03:02.044568] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044573] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044577] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:02.044583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.524 [2024-05-15 00:03:02.044590] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044595] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044599] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:02.044605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.524 [2024-05-15 00:03:02.044612] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044617] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044621] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:02.044627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.524 [2024-05-15 00:03:02.044633] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:01.524 [2024-05-15 00:03:02.044646] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:01.524 [2024-05-15 00:03:02.044654] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044658] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ca0) 00:21:01.524 [2024-05-15 00:03:02.044665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:01.524 [2024-05-15 00:03:02.044679] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1980, cid 0, qid 0 00:21:01.524 [2024-05-15 00:03:02.044685] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1ae0, cid 1, qid 0 00:21:01.524 [2024-05-15 00:03:02.044691] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1c40, cid 2, qid 0 00:21:01.524 [2024-05-15 00:03:02.044696] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.524 [2024-05-15 00:03:02.044703] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1f00, cid 4, qid 0 00:21:01.524 [2024-05-15 00:03:02.044855] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.524 [2024-05-15 00:03:02.044862] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.524 [2024-05-15 00:03:02.044867] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.524 [2024-05-15 00:03:02.044872] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1f00) on tqpair=0x2477ca0 00:21:01.524 [2024-05-15 00:03:02.044879] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:01.525 [2024-05-15 00:03:02.044885] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:01.525 [2024-05-15 00:03:02.044894] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:01.525 [2024-05-15 00:03:02.044904] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:01.525 [2024-05-15 00:03:02.044911] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.044916] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.044921] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ca0) 00:21:01.525 [2024-05-15 00:03:02.044928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:01.525 [2024-05-15 00:03:02.044940] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1f00, cid 4, qid 0 00:21:01.525 [2024-05-15 00:03:02.045056] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.525 [2024-05-15 00:03:02.045063] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.525 [2024-05-15 00:03:02.045067] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.045072] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1f00) on tqpair=0x2477ca0 00:21:01.525 [2024-05-15 00:03:02.045117] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:01.525 [2024-05-15 00:03:02.045129] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:01.525 [2024-05-15 00:03:02.045137] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.045142] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ca0) 00:21:01.525 [2024-05-15 00:03:02.045149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.525 [2024-05-15 00:03:02.045161] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1f00, cid 4, qid 0 00:21:01.525 [2024-05-15 00:03:02.045470] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.525 [2024-05-15 00:03:02.045477] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.525 [2024-05-15 00:03:02.045482] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.045486] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ca0): datao=0, datal=4096, cccid=4 00:21:01.525 [2024-05-15 00:03:02.045492] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e1f00) on tqpair(0x2477ca0): expected_datao=0, payload_size=4096 00:21:01.525 [2024-05-15 00:03:02.045498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.045505] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.045510] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.045705] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.525 [2024-05-15 00:03:02.045714] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.525 [2024-05-15 00:03:02.045718] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.045723] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1f00) on tqpair=0x2477ca0 00:21:01.525 [2024-05-15 00:03:02.045740] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:01.525 [2024-05-15 00:03:02.045751] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:01.525 [2024-05-15 00:03:02.045762] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:01.525 [2024-05-15 00:03:02.045770] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.045775] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ca0) 00:21:01.525 [2024-05-15 00:03:02.045781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.525 [2024-05-15 00:03:02.045794] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1f00, cid 4, qid 0 00:21:01.525 [2024-05-15 00:03:02.045941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.525 [2024-05-15 00:03:02.045949] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.525 [2024-05-15 00:03:02.045954] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.045958] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ca0): datao=0, datal=4096, cccid=4 00:21:01.525 [2024-05-15 00:03:02.045964] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e1f00) on tqpair(0x2477ca0): expected_datao=0, payload_size=4096 
00:21:01.525 [2024-05-15 00:03:02.045970] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.046138] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.046143] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.086379] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.525 [2024-05-15 00:03:02.086393] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.525 [2024-05-15 00:03:02.086397] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.086402] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1f00) on tqpair=0x2477ca0 00:21:01.525 [2024-05-15 00:03:02.086416] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:01.525 [2024-05-15 00:03:02.086427] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:01.525 [2024-05-15 00:03:02.086437] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.086441] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ca0) 00:21:01.525 [2024-05-15 00:03:02.086449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.525 [2024-05-15 00:03:02.086463] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1f00, cid 4, qid 0 00:21:01.525 [2024-05-15 00:03:02.086589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.525 [2024-05-15 00:03:02.086597] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.525 [2024-05-15 00:03:02.086601] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.086606] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ca0): datao=0, datal=4096, cccid=4 00:21:01.525 [2024-05-15 00:03:02.086612] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e1f00) on tqpair(0x2477ca0): expected_datao=0, payload_size=4096 00:21:01.525 [2024-05-15 00:03:02.086620] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.086809] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.525 [2024-05-15 00:03:02.086814] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.790 [2024-05-15 00:03:02.127382] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.790 [2024-05-15 00:03:02.127395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.790 [2024-05-15 00:03:02.127399] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.790 [2024-05-15 00:03:02.127404] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1f00) on tqpair=0x2477ca0 00:21:01.790 [2024-05-15 00:03:02.127419] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:01.790 [2024-05-15 00:03:02.127429] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 
00:21:01.790 [2024-05-15 00:03:02.127438] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:01.790 [2024-05-15 00:03:02.127445] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:01.790 [2024-05-15 00:03:02.127451] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:01.790 [2024-05-15 00:03:02.127458] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:01.790 [2024-05-15 00:03:02.127464] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:01.790 [2024-05-15 00:03:02.127470] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:01.790 [2024-05-15 00:03:02.127488] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.790 [2024-05-15 00:03:02.127493] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ca0) 00:21:01.790 [2024-05-15 00:03:02.127501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.790 [2024-05-15 00:03:02.127508] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.790 [2024-05-15 00:03:02.127513] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.790 [2024-05-15 00:03:02.127518] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477ca0) 00:21:01.790 [2024-05-15 00:03:02.127524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.790 [2024-05-15 00:03:02.127540] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1f00, cid 4, qid 0 00:21:01.790 [2024-05-15 00:03:02.127547] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e2060, cid 5, qid 0 00:21:01.790 [2024-05-15 00:03:02.127679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.790 [2024-05-15 00:03:02.127687] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.790 [2024-05-15 00:03:02.127691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.790 [2024-05-15 00:03:02.127696] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1f00) on tqpair=0x2477ca0 00:21:01.790 [2024-05-15 00:03:02.127704] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.790 [2024-05-15 00:03:02.127710] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.790 [2024-05-15 00:03:02.127715] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.790 [2024-05-15 00:03:02.127720] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e2060) on tqpair=0x2477ca0 00:21:01.790 [2024-05-15 00:03:02.127732] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.790 [2024-05-15 00:03:02.127737] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477ca0) 00:21:01.790 [2024-05-15 00:03:02.127746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER 
MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.790 [2024-05-15 00:03:02.127758] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e2060, cid 5, qid 0 00:21:01.790 [2024-05-15 00:03:02.127887] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.790 [2024-05-15 00:03:02.127894] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.790 [2024-05-15 00:03:02.127899] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.790 [2024-05-15 00:03:02.127903] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e2060) on tqpair=0x2477ca0 00:21:01.790 [2024-05-15 00:03:02.127915] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.790 [2024-05-15 00:03:02.127920] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477ca0) 00:21:01.790 [2024-05-15 00:03:02.127926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.790 [2024-05-15 00:03:02.127938] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e2060, cid 5, qid 0 00:21:01.790 [2024-05-15 00:03:02.128055] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.790 [2024-05-15 00:03:02.128062] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.790 [2024-05-15 00:03:02.128066] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.128071] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e2060) on tqpair=0x2477ca0 00:21:01.791 [2024-05-15 00:03:02.128082] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.128087] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477ca0) 00:21:01.791 [2024-05-15 00:03:02.128094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.791 [2024-05-15 00:03:02.128105] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e2060, cid 5, qid 0 00:21:01.791 [2024-05-15 00:03:02.132201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.791 [2024-05-15 00:03:02.132212] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.791 [2024-05-15 00:03:02.132217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132222] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e2060) on tqpair=0x2477ca0 00:21:01.791 [2024-05-15 00:03:02.132239] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132244] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477ca0) 00:21:01.791 [2024-05-15 00:03:02.132252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.791 [2024-05-15 00:03:02.132260] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132264] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ca0) 00:21:01.791 [2024-05-15 00:03:02.132271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.791 [2024-05-15 00:03:02.132279] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132283] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2477ca0) 00:21:01.791 [2024-05-15 00:03:02.132290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.791 [2024-05-15 00:03:02.132301] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132306] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2477ca0) 00:21:01.791 [2024-05-15 00:03:02.132314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.791 [2024-05-15 00:03:02.132330] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e2060, cid 5, qid 0 00:21:01.791 [2024-05-15 00:03:02.132336] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1f00, cid 4, qid 0 00:21:01.791 [2024-05-15 00:03:02.132341] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e21c0, cid 6, qid 0 00:21:01.791 [2024-05-15 00:03:02.132347] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e2320, cid 7, qid 0 00:21:01.791 [2024-05-15 00:03:02.132733] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.791 [2024-05-15 00:03:02.132743] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.791 [2024-05-15 00:03:02.132747] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132752] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ca0): datao=0, datal=8192, cccid=5 00:21:01.791 [2024-05-15 00:03:02.132758] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e2060) on tqpair(0x2477ca0): expected_datao=0, payload_size=8192 00:21:01.791 [2024-05-15 00:03:02.132763] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132771] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132776] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132782] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.791 [2024-05-15 00:03:02.132788] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.791 [2024-05-15 00:03:02.132792] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132797] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ca0): datao=0, datal=512, cccid=4 00:21:01.791 [2024-05-15 00:03:02.132802] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e1f00) on tqpair(0x2477ca0): expected_datao=0, payload_size=512 00:21:01.791 [2024-05-15 00:03:02.132808] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132815] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132819] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132825] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.791 [2024-05-15 00:03:02.132832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.791 [2024-05-15 00:03:02.132836] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132840] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ca0): datao=0, datal=512, cccid=6 00:21:01.791 [2024-05-15 00:03:02.132846] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e21c0) on tqpair(0x2477ca0): expected_datao=0, payload_size=512 00:21:01.791 [2024-05-15 00:03:02.132852] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132858] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132863] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132869] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:01.791 [2024-05-15 00:03:02.132875] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:01.791 [2024-05-15 00:03:02.132879] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132884] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ca0): datao=0, datal=4096, cccid=7 00:21:01.791 [2024-05-15 00:03:02.132889] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e2320) on tqpair(0x2477ca0): expected_datao=0, payload_size=4096 00:21:01.791 [2024-05-15 00:03:02.132895] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132902] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132906] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.132986] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.791 [2024-05-15 00:03:02.132993] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.791 [2024-05-15 00:03:02.132997] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.133002] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e2060) on tqpair=0x2477ca0 00:21:01.791 [2024-05-15 00:03:02.133017] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.791 [2024-05-15 00:03:02.133024] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.791 [2024-05-15 00:03:02.133028] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.133033] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1f00) on tqpair=0x2477ca0 00:21:01.791 [2024-05-15 00:03:02.133043] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.791 [2024-05-15 00:03:02.133050] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.791 [2024-05-15 00:03:02.133054] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.133059] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e21c0) on tqpair=0x2477ca0 00:21:01.791 [2024-05-15 00:03:02.133069] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.791 [2024-05-15 00:03:02.133076] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.791 [2024-05-15 00:03:02.133080] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.791 [2024-05-15 00:03:02.133085] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e2320) on tqpair=0x2477ca0 00:21:01.791 ===================================================== 00:21:01.791 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:01.791 ===================================================== 00:21:01.791 Controller Capabilities/Features 00:21:01.791 ================================ 00:21:01.791 Vendor ID: 8086 00:21:01.791 Subsystem Vendor ID: 8086 00:21:01.791 Serial Number: SPDK00000000000001 00:21:01.791 Model Number: SPDK bdev Controller 00:21:01.791 Firmware Version: 24.05 00:21:01.791 Recommended Arb Burst: 6 00:21:01.791 IEEE OUI Identifier: e4 d2 5c 00:21:01.791 Multi-path I/O 00:21:01.791 May have multiple subsystem ports: Yes 00:21:01.791 May have multiple controllers: Yes 00:21:01.791 Associated with SR-IOV VF: No 00:21:01.791 Max Data Transfer Size: 131072 00:21:01.791 Max Number of Namespaces: 32 00:21:01.791 Max Number of I/O Queues: 127 00:21:01.792 NVMe Specification Version (VS): 1.3 00:21:01.792 NVMe Specification Version (Identify): 1.3 00:21:01.792 Maximum Queue Entries: 128 00:21:01.792 Contiguous Queues Required: Yes 00:21:01.792 Arbitration Mechanisms Supported 00:21:01.792 Weighted Round Robin: Not Supported 00:21:01.792 Vendor Specific: Not Supported 00:21:01.792 Reset Timeout: 15000 ms 00:21:01.792 Doorbell Stride: 4 bytes 00:21:01.792 NVM Subsystem Reset: Not Supported 00:21:01.792 Command Sets Supported 00:21:01.792 NVM Command Set: Supported 00:21:01.792 Boot Partition: Not Supported 00:21:01.792 Memory Page Size Minimum: 4096 bytes 00:21:01.792 Memory Page Size Maximum: 4096 bytes 00:21:01.792 Persistent Memory Region: Not Supported 00:21:01.792 Optional Asynchronous Events Supported 00:21:01.792 Namespace Attribute Notices: Supported 00:21:01.792 Firmware Activation Notices: Not Supported 00:21:01.792 ANA Change Notices: Not Supported 00:21:01.792 PLE Aggregate Log Change Notices: Not Supported 00:21:01.792 LBA Status Info Alert Notices: Not Supported 00:21:01.792 EGE Aggregate Log Change Notices: Not Supported 00:21:01.792 Normal NVM Subsystem Shutdown event: Not Supported 00:21:01.792 Zone Descriptor Change Notices: Not Supported 00:21:01.792 Discovery Log Change Notices: Not Supported 00:21:01.792 Controller Attributes 00:21:01.792 128-bit Host Identifier: Supported 00:21:01.792 Non-Operational Permissive Mode: Not Supported 00:21:01.792 NVM Sets: Not Supported 00:21:01.792 Read Recovery Levels: Not Supported 00:21:01.792 Endurance Groups: Not Supported 00:21:01.792 Predictable Latency Mode: Not Supported 00:21:01.792 Traffic Based Keep ALive: Not Supported 00:21:01.792 Namespace Granularity: Not Supported 00:21:01.792 SQ Associations: Not Supported 00:21:01.792 UUID List: Not Supported 00:21:01.792 Multi-Domain Subsystem: Not Supported 00:21:01.792 Fixed Capacity Management: Not Supported 00:21:01.792 Variable Capacity Management: Not Supported 00:21:01.792 Delete Endurance Group: Not Supported 00:21:01.792 Delete NVM Set: Not Supported 00:21:01.792 Extended LBA Formats Supported: Not Supported 00:21:01.792 Flexible Data Placement Supported: Not Supported 00:21:01.792 00:21:01.792 Controller Memory Buffer Support 00:21:01.792 ================================ 00:21:01.792 Supported: No 00:21:01.792 00:21:01.792 Persistent Memory Region Support 00:21:01.792 ================================ 00:21:01.792 Supported: No 
00:21:01.792 00:21:01.792 Admin Command Set Attributes 00:21:01.792 ============================ 00:21:01.792 Security Send/Receive: Not Supported 00:21:01.792 Format NVM: Not Supported 00:21:01.792 Firmware Activate/Download: Not Supported 00:21:01.792 Namespace Management: Not Supported 00:21:01.792 Device Self-Test: Not Supported 00:21:01.792 Directives: Not Supported 00:21:01.792 NVMe-MI: Not Supported 00:21:01.792 Virtualization Management: Not Supported 00:21:01.792 Doorbell Buffer Config: Not Supported 00:21:01.792 Get LBA Status Capability: Not Supported 00:21:01.792 Command & Feature Lockdown Capability: Not Supported 00:21:01.792 Abort Command Limit: 4 00:21:01.792 Async Event Request Limit: 4 00:21:01.792 Number of Firmware Slots: N/A 00:21:01.792 Firmware Slot 1 Read-Only: N/A 00:21:01.792 Firmware Activation Without Reset: N/A 00:21:01.792 Multiple Update Detection Support: N/A 00:21:01.792 Firmware Update Granularity: No Information Provided 00:21:01.792 Per-Namespace SMART Log: No 00:21:01.792 Asymmetric Namespace Access Log Page: Not Supported 00:21:01.792 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:01.792 Command Effects Log Page: Supported 00:21:01.792 Get Log Page Extended Data: Supported 00:21:01.792 Telemetry Log Pages: Not Supported 00:21:01.792 Persistent Event Log Pages: Not Supported 00:21:01.792 Supported Log Pages Log Page: May Support 00:21:01.792 Commands Supported & Effects Log Page: Not Supported 00:21:01.792 Feature Identifiers & Effects Log Page:May Support 00:21:01.792 NVMe-MI Commands & Effects Log Page: May Support 00:21:01.792 Data Area 4 for Telemetry Log: Not Supported 00:21:01.792 Error Log Page Entries Supported: 128 00:21:01.792 Keep Alive: Supported 00:21:01.792 Keep Alive Granularity: 10000 ms 00:21:01.792 00:21:01.792 NVM Command Set Attributes 00:21:01.792 ========================== 00:21:01.792 Submission Queue Entry Size 00:21:01.792 Max: 64 00:21:01.792 Min: 64 00:21:01.792 Completion Queue Entry Size 00:21:01.792 Max: 16 00:21:01.792 Min: 16 00:21:01.792 Number of Namespaces: 32 00:21:01.792 Compare Command: Supported 00:21:01.792 Write Uncorrectable Command: Not Supported 00:21:01.792 Dataset Management Command: Supported 00:21:01.792 Write Zeroes Command: Supported 00:21:01.792 Set Features Save Field: Not Supported 00:21:01.792 Reservations: Supported 00:21:01.792 Timestamp: Not Supported 00:21:01.792 Copy: Supported 00:21:01.792 Volatile Write Cache: Present 00:21:01.792 Atomic Write Unit (Normal): 1 00:21:01.792 Atomic Write Unit (PFail): 1 00:21:01.792 Atomic Compare & Write Unit: 1 00:21:01.792 Fused Compare & Write: Supported 00:21:01.792 Scatter-Gather List 00:21:01.792 SGL Command Set: Supported 00:21:01.792 SGL Keyed: Supported 00:21:01.792 SGL Bit Bucket Descriptor: Not Supported 00:21:01.792 SGL Metadata Pointer: Not Supported 00:21:01.792 Oversized SGL: Not Supported 00:21:01.792 SGL Metadata Address: Not Supported 00:21:01.792 SGL Offset: Supported 00:21:01.792 Transport SGL Data Block: Not Supported 00:21:01.792 Replay Protected Memory Block: Not Supported 00:21:01.792 00:21:01.792 Firmware Slot Information 00:21:01.792 ========================= 00:21:01.792 Active slot: 1 00:21:01.792 Slot 1 Firmware Revision: 24.05 00:21:01.792 00:21:01.792 00:21:01.792 Commands Supported and Effects 00:21:01.792 ============================== 00:21:01.792 Admin Commands 00:21:01.792 -------------- 00:21:01.792 Get Log Page (02h): Supported 00:21:01.792 Identify (06h): Supported 00:21:01.792 Abort (08h): Supported 00:21:01.792 Set 
Features (09h): Supported 00:21:01.792 Get Features (0Ah): Supported 00:21:01.792 Asynchronous Event Request (0Ch): Supported 00:21:01.792 Keep Alive (18h): Supported 00:21:01.792 I/O Commands 00:21:01.792 ------------ 00:21:01.792 Flush (00h): Supported LBA-Change 00:21:01.792 Write (01h): Supported LBA-Change 00:21:01.792 Read (02h): Supported 00:21:01.792 Compare (05h): Supported 00:21:01.792 Write Zeroes (08h): Supported LBA-Change 00:21:01.792 Dataset Management (09h): Supported LBA-Change 00:21:01.792 Copy (19h): Supported LBA-Change 00:21:01.792 Unknown (79h): Supported LBA-Change 00:21:01.792 Unknown (7Ah): Supported 00:21:01.792 00:21:01.792 Error Log 00:21:01.792 ========= 00:21:01.792 00:21:01.792 Arbitration 00:21:01.792 =========== 00:21:01.792 Arbitration Burst: 1 00:21:01.792 00:21:01.792 Power Management 00:21:01.792 ================ 00:21:01.792 Number of Power States: 1 00:21:01.792 Current Power State: Power State #0 00:21:01.792 Power State #0: 00:21:01.792 Max Power: 0.00 W 00:21:01.792 Non-Operational State: Operational 00:21:01.792 Entry Latency: Not Reported 00:21:01.792 Exit Latency: Not Reported 00:21:01.792 Relative Read Throughput: 0 00:21:01.792 Relative Read Latency: 0 00:21:01.792 Relative Write Throughput: 0 00:21:01.792 Relative Write Latency: 0 00:21:01.792 Idle Power: Not Reported 00:21:01.792 Active Power: Not Reported 00:21:01.792 Non-Operational Permissive Mode: Not Supported 00:21:01.792 00:21:01.792 Health Information 00:21:01.792 ================== 00:21:01.792 Critical Warnings: 00:21:01.793 Available Spare Space: OK 00:21:01.793 Temperature: OK 00:21:01.793 Device Reliability: OK 00:21:01.793 Read Only: No 00:21:01.793 Volatile Memory Backup: OK 00:21:01.793 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:01.793 Temperature Threshold: [2024-05-15 00:03:02.133178] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.133183] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2477ca0) 00:21:01.793 [2024-05-15 00:03:02.133198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.793 [2024-05-15 00:03:02.133213] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e2320, cid 7, qid 0 00:21:01.793 [2024-05-15 00:03:02.137199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.793 [2024-05-15 00:03:02.137211] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.793 [2024-05-15 00:03:02.137216] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137221] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e2320) on tqpair=0x2477ca0 00:21:01.793 [2024-05-15 00:03:02.137259] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:01.793 [2024-05-15 00:03:02.137272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.793 [2024-05-15 00:03:02.137280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.793 [2024-05-15 00:03:02.137287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.793 [2024-05-15 00:03:02.137294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.793 [2024-05-15 00:03:02.137303] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137308] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137312] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.793 [2024-05-15 00:03:02.137320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.793 [2024-05-15 00:03:02.137336] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.793 [2024-05-15 00:03:02.137538] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.793 [2024-05-15 00:03:02.137545] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.793 [2024-05-15 00:03:02.137552] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137557] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.793 [2024-05-15 00:03:02.137566] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137575] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.793 [2024-05-15 00:03:02.137582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.793 [2024-05-15 00:03:02.137598] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.793 [2024-05-15 00:03:02.137729] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.793 [2024-05-15 00:03:02.137736] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.793 [2024-05-15 00:03:02.137740] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137745] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.793 [2024-05-15 00:03:02.137751] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:01.793 [2024-05-15 00:03:02.137757] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:01.793 [2024-05-15 00:03:02.137768] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137773] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137777] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.793 [2024-05-15 00:03:02.137784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.793 [2024-05-15 00:03:02.137796] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.793 [2024-05-15 00:03:02.137914] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.793 [2024-05-15 00:03:02.137921] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.793 [2024-05-15 
00:03:02.137925] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137930] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.793 [2024-05-15 00:03:02.137942] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137947] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.137951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.793 [2024-05-15 00:03:02.137958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.793 [2024-05-15 00:03:02.137970] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.793 [2024-05-15 00:03:02.138085] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.793 [2024-05-15 00:03:02.138092] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.793 [2024-05-15 00:03:02.138096] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138101] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.793 [2024-05-15 00:03:02.138112] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138117] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.793 [2024-05-15 00:03:02.138129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.793 [2024-05-15 00:03:02.138140] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.793 [2024-05-15 00:03:02.138421] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.793 [2024-05-15 00:03:02.138428] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.793 [2024-05-15 00:03:02.138432] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138437] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.793 [2024-05-15 00:03:02.138448] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138453] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138458] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.793 [2024-05-15 00:03:02.138465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.793 [2024-05-15 00:03:02.138477] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.793 [2024-05-15 00:03:02.138599] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.793 [2024-05-15 00:03:02.138607] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.793 [2024-05-15 00:03:02.138611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138616] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.793 [2024-05-15 00:03:02.138627] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138632] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138637] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.793 [2024-05-15 00:03:02.138644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.793 [2024-05-15 00:03:02.138655] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.793 [2024-05-15 00:03:02.138773] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.793 [2024-05-15 00:03:02.138780] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.793 [2024-05-15 00:03:02.138784] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138789] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.793 [2024-05-15 00:03:02.138799] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138804] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138809] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.793 [2024-05-15 00:03:02.138816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.793 [2024-05-15 00:03:02.138827] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.793 [2024-05-15 00:03:02.138941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.793 [2024-05-15 00:03:02.138948] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.793 [2024-05-15 00:03:02.138953] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138957] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.793 [2024-05-15 00:03:02.138969] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138974] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.793 [2024-05-15 00:03:02.138978] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.793 [2024-05-15 00:03:02.138985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.794 [2024-05-15 00:03:02.138996] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.139112] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.139119] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.139123] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139128] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.139139] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139144] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139149] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.794 [2024-05-15 00:03:02.139156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.794 [2024-05-15 00:03:02.139167] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.139287] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.139295] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.139299] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139304] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.139316] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139325] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.794 [2024-05-15 00:03:02.139332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.794 [2024-05-15 00:03:02.139344] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.139457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.139464] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.139468] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139473] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.139484] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139489] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139493] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.794 [2024-05-15 00:03:02.139500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.794 [2024-05-15 00:03:02.139512] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.139631] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.139637] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.139642] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139646] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.139658] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139663] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139667] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x2477ca0) 00:21:01.794 [2024-05-15 00:03:02.139674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.794 [2024-05-15 00:03:02.139685] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.139800] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.139809] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.139814] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139819] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.139830] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139835] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139840] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.794 [2024-05-15 00:03:02.139847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.794 [2024-05-15 00:03:02.139858] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.139974] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.139981] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.139986] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.139990] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.140002] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140006] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140011] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.794 [2024-05-15 00:03:02.140018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.794 [2024-05-15 00:03:02.140029] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.140144] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.140151] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.140155] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140160] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.140172] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140176] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140181] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.794 [2024-05-15 00:03:02.140188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:01.794 [2024-05-15 00:03:02.140205] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.140325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.140331] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.140336] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140341] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.140352] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140357] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140361] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.794 [2024-05-15 00:03:02.140368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.794 [2024-05-15 00:03:02.140379] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.140495] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.140502] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.140509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140514] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.140525] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140530] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140535] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.794 [2024-05-15 00:03:02.140542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.794 [2024-05-15 00:03:02.140553] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.140670] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.140677] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.140681] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140686] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.140698] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140702] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.794 [2024-05-15 00:03:02.140714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.794 [2024-05-15 00:03:02.140725] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.794 [2024-05-15 00:03:02.140849] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.794 [2024-05-15 00:03:02.140855] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.794 [2024-05-15 00:03:02.140860] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140865] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.794 [2024-05-15 00:03:02.140876] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.794 [2024-05-15 00:03:02.140881] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.140885] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.795 [2024-05-15 00:03:02.140892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.795 [2024-05-15 00:03:02.140903] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.795 [2024-05-15 00:03:02.141019] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.795 [2024-05-15 00:03:02.141026] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.795 [2024-05-15 00:03:02.141030] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141035] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.795 [2024-05-15 00:03:02.141046] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141051] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141056] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.795 [2024-05-15 00:03:02.141063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.795 [2024-05-15 00:03:02.141074] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.795 [2024-05-15 00:03:02.141189] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.795 [2024-05-15 00:03:02.141201] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.795 [2024-05-15 00:03:02.141206] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.795 [2024-05-15 00:03:02.141225] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141229] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141234] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.795 [2024-05-15 00:03:02.141241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.795 [2024-05-15 00:03:02.141253] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.795 [2024-05-15 00:03:02.141372] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.795 [2024-05-15 00:03:02.141379] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.795 
[2024-05-15 00:03:02.141383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141388] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.795 [2024-05-15 00:03:02.141399] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141404] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141409] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.795 [2024-05-15 00:03:02.141416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.795 [2024-05-15 00:03:02.141427] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.795 [2024-05-15 00:03:02.141542] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.795 [2024-05-15 00:03:02.141549] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.795 [2024-05-15 00:03:02.141553] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141558] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.795 [2024-05-15 00:03:02.141569] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141574] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141578] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.795 [2024-05-15 00:03:02.141585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.795 [2024-05-15 00:03:02.141596] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.795 [2024-05-15 00:03:02.141709] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.795 [2024-05-15 00:03:02.141716] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.795 [2024-05-15 00:03:02.141720] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141725] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.795 [2024-05-15 00:03:02.141736] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141741] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141746] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.795 [2024-05-15 00:03:02.141753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.795 [2024-05-15 00:03:02.141764] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.795 [2024-05-15 00:03:02.141877] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.795 [2024-05-15 00:03:02.141884] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.795 [2024-05-15 00:03:02.141888] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141893] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.795 [2024-05-15 00:03:02.141907] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141912] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.141916] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.795 [2024-05-15 00:03:02.141923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.795 [2024-05-15 00:03:02.141934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.795 [2024-05-15 00:03:02.142053] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.795 [2024-05-15 00:03:02.142059] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.795 [2024-05-15 00:03:02.142064] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.142068] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.795 [2024-05-15 00:03:02.142080] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.142085] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.795 [2024-05-15 00:03:02.142089] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.795 [2024-05-15 00:03:02.142096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.795 [2024-05-15 00:03:02.142107] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.795 [2024-05-15 00:03:02.142229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.795 [2024-05-15 00:03:02.142236] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.796 [2024-05-15 00:03:02.142241] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142246] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.796 [2024-05-15 00:03:02.142257] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142262] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142267] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.796 [2024-05-15 00:03:02.142274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.796 [2024-05-15 00:03:02.142286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.796 [2024-05-15 00:03:02.142405] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.796 [2024-05-15 00:03:02.142412] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.796 [2024-05-15 00:03:02.142416] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.796 [2024-05-15 00:03:02.142433] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142438] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142442] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.796 [2024-05-15 00:03:02.142449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.796 [2024-05-15 00:03:02.142460] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.796 [2024-05-15 00:03:02.142576] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.796 [2024-05-15 00:03:02.142583] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.796 [2024-05-15 00:03:02.142587] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142592] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.796 [2024-05-15 00:03:02.142606] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142611] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142616] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.796 [2024-05-15 00:03:02.142622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.796 [2024-05-15 00:03:02.142634] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.796 [2024-05-15 00:03:02.142755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.796 [2024-05-15 00:03:02.142761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.796 [2024-05-15 00:03:02.142766] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142770] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.796 [2024-05-15 00:03:02.142782] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142787] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142791] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.796 [2024-05-15 00:03:02.142798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.796 [2024-05-15 00:03:02.142810] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.796 [2024-05-15 00:03:02.142925] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.796 [2024-05-15 00:03:02.142932] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.796 [2024-05-15 00:03:02.142936] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.796 [2024-05-15 00:03:02.142951] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142956] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.142961] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x2477ca0) 00:21:01.796 [2024-05-15 00:03:02.142967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.796 [2024-05-15 00:03:02.142979] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.796 [2024-05-15 00:03:02.143094] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.796 [2024-05-15 00:03:02.143101] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.796 [2024-05-15 00:03:02.143105] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.143110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.796 [2024-05-15 00:03:02.143122] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.143126] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.143131] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.796 [2024-05-15 00:03:02.143138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.796 [2024-05-15 00:03:02.143149] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.796 [2024-05-15 00:03:02.147198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.796 [2024-05-15 00:03:02.147208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.796 [2024-05-15 00:03:02.147212] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.147217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.796 [2024-05-15 00:03:02.147229] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.147239] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.147244] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ca0) 00:21:01.796 [2024-05-15 00:03:02.147251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.796 [2024-05-15 00:03:02.147264] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e1da0, cid 3, qid 0 00:21:01.796 [2024-05-15 00:03:02.147468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:01.796 [2024-05-15 00:03:02.147475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:01.796 [2024-05-15 00:03:02.147480] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:01.796 [2024-05-15 00:03:02.147485] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e1da0) on tqpair=0x2477ca0 00:21:01.796 [2024-05-15 00:03:02.147495] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 9 milliseconds 00:21:01.796 0 Kelvin (-273 Celsius) 00:21:01.796 Available Spare: 0% 00:21:01.796 Available Spare Threshold: 0% 00:21:01.796 Life Percentage Used: 0% 00:21:01.796 Data Units Read: 0 00:21:01.796 Data Units Written: 0 00:21:01.796 Host Read Commands: 0 00:21:01.796 Host Write Commands: 0 00:21:01.796 Controller Busy Time: 0 minutes 00:21:01.796 Power 
Cycles: 0 00:21:01.796 Power On Hours: 0 hours 00:21:01.796 Unsafe Shutdowns: 0 00:21:01.796 Unrecoverable Media Errors: 0 00:21:01.796 Lifetime Error Log Entries: 0 00:21:01.796 Warning Temperature Time: 0 minutes 00:21:01.796 Critical Temperature Time: 0 minutes 00:21:01.796 00:21:01.796 Number of Queues 00:21:01.796 ================ 00:21:01.796 Number of I/O Submission Queues: 127 00:21:01.796 Number of I/O Completion Queues: 127 00:21:01.796 00:21:01.796 Active Namespaces 00:21:01.796 ================= 00:21:01.796 Namespace ID:1 00:21:01.796 Error Recovery Timeout: Unlimited 00:21:01.796 Command Set Identifier: NVM (00h) 00:21:01.796 Deallocate: Supported 00:21:01.796 Deallocated/Unwritten Error: Not Supported 00:21:01.796 Deallocated Read Value: Unknown 00:21:01.796 Deallocate in Write Zeroes: Not Supported 00:21:01.796 Deallocated Guard Field: 0xFFFF 00:21:01.796 Flush: Supported 00:21:01.796 Reservation: Supported 00:21:01.796 Namespace Sharing Capabilities: Multiple Controllers 00:21:01.796 Size (in LBAs): 131072 (0GiB) 00:21:01.796 Capacity (in LBAs): 131072 (0GiB) 00:21:01.796 Utilization (in LBAs): 131072 (0GiB) 00:21:01.796 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:01.796 EUI64: ABCDEF0123456789 00:21:01.796 UUID: af942408-a64e-4d7a-9143-31c55f421592 00:21:01.796 Thin Provisioning: Not Supported 00:21:01.796 Per-NS Atomic Units: Yes 00:21:01.796 Atomic Boundary Size (Normal): 0 00:21:01.796 Atomic Boundary Size (PFail): 0 00:21:01.796 Atomic Boundary Offset: 0 00:21:01.796 Maximum Single Source Range Length: 65535 00:21:01.796 Maximum Copy Length: 65535 00:21:01.796 Maximum Source Range Count: 1 00:21:01.797 NGUID/EUI64 Never Reused: No 00:21:01.797 Namespace Write Protected: No 00:21:01.797 Number of LBA Formats: 1 00:21:01.797 Current LBA Format: LBA Format #00 00:21:01.797 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:01.797 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:01.797 rmmod nvme_tcp 00:21:01.797 rmmod nvme_fabrics 00:21:01.797 rmmod nvme_keyring 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3648143 ']' 00:21:01.797 00:03:02 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3648143 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3648143 ']' 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3648143 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3648143 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3648143' 00:21:01.797 killing process with pid 3648143 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3648143 00:21:01.797 [2024-05-15 00:03:02.323879] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:01.797 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3648143 00:21:02.056 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:02.056 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:02.056 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:02.056 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.056 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:02.056 00:03:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.056 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.056 00:03:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.595 00:03:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:04.595 00:21:04.595 real 0m10.944s 00:21:04.595 user 0m8.331s 00:21:04.595 sys 0m5.811s 00:21:04.595 00:03:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:04.595 00:03:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:04.595 ************************************ 00:21:04.595 END TEST nvmf_identify 00:21:04.596 ************************************ 00:21:04.596 00:03:04 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:04.596 00:03:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:04.596 00:03:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:04.596 00:03:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:04.596 ************************************ 00:21:04.596 START TEST nvmf_perf 00:21:04.596 ************************************ 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:04.596 * Looking for test storage... 
00:21:04.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.596 00:03:04 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:04.596 00:03:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.169 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:11.170 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:11.170 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:11.170 Found net devices under 0000:af:00.0: cvl_0_0 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:11.170 Found net devices under 0000:af:00.1: cvl_0_1 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:11.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:21:11.170 00:21:11.170 --- 10.0.0.2 ping statistics --- 00:21:11.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.170 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:11.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:21:11.170 00:21:11.170 --- 10.0.0.1 ping statistics --- 00:21:11.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.170 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3652610 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3652610 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3652610 ']' 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:11.170 00:03:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:11.170 [2024-05-15 00:03:11.467679] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:21:11.170 [2024-05-15 00:03:11.467736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.170 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.170 [2024-05-15 00:03:11.542161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.170 [2024-05-15 00:03:11.620362] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.170 [2024-05-15 00:03:11.620394] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:11.170 [2024-05-15 00:03:11.620404] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.170 [2024-05-15 00:03:11.620412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.170 [2024-05-15 00:03:11.620419] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.170 [2024-05-15 00:03:11.620468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.170 [2024-05-15 00:03:11.620562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.170 [2024-05-15 00:03:11.620647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.170 [2024-05-15 00:03:11.620648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.738 00:03:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:11.738 00:03:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:21:11.738 00:03:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.738 00:03:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.738 00:03:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:11.738 00:03:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.738 00:03:12 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:11.738 00:03:12 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:15.042 00:03:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:15.042 00:03:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:15.042 00:03:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:21:15.042 00:03:15 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:15.301 00:03:15 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:15.301 00:03:15 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:21:15.301 00:03:15 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:15.301 00:03:15 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:15.301 00:03:15 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:15.301 [2024-05-15 00:03:15.892000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.560 00:03:15 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:15.560 00:03:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:15.560 00:03:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:15.819 00:03:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:15.819 00:03:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:16.078 00:03:16 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.078 [2024-05-15 00:03:16.627807] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:16.078 [2024-05-15 00:03:16.628064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.078 00:03:16 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:16.337 00:03:16 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:21:16.337 00:03:16 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:21:16.337 00:03:16 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:16.337 00:03:16 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:21:17.716 Initializing NVMe Controllers 00:21:17.716 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:21:17.716 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:21:17.716 Initialization complete. Launching workers. 00:21:17.716 ======================================================== 00:21:17.716 Latency(us) 00:21:17.716 Device Information : IOPS MiB/s Average min max 00:21:17.716 PCIE (0000:d8:00.0) NSID 1 from core 0: 102490.80 400.35 311.85 34.06 5219.16 00:21:17.716 ======================================================== 00:21:17.716 Total : 102490.80 400.35 311.85 34.06 5219.16 00:21:17.716 00:21:17.716 00:03:18 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:17.716 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.097 Initializing NVMe Controllers 00:21:19.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:19.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:19.097 Initialization complete. Launching workers. 
00:21:19.097 ======================================================== 00:21:19.097 Latency(us) 00:21:19.097 Device Information : IOPS MiB/s Average min max 00:21:19.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 68.69 0.27 14941.11 436.31 45497.67 00:21:19.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.70 0.26 15824.63 7001.63 49860.03 00:21:19.097 ======================================================== 00:21:19.097 Total : 134.39 0.52 15373.05 436.31 49860.03 00:21:19.097 00:21:19.097 00:03:19 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.097 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.476 Initializing NVMe Controllers 00:21:20.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:20.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:20.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:20.476 Initialization complete. Launching workers. 00:21:20.476 ======================================================== 00:21:20.476 Latency(us) 00:21:20.476 Device Information : IOPS MiB/s Average min max 00:21:20.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8310.55 32.46 3850.16 780.98 9492.93 00:21:20.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3836.25 14.99 8430.42 4210.79 18359.14 00:21:20.476 ======================================================== 00:21:20.476 Total : 12146.80 47.45 5296.72 780.98 18359.14 00:21:20.476 00:21:20.476 00:03:20 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:20.476 00:03:20 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:20.476 00:03:20 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:20.476 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.023 Initializing NVMe Controllers 00:21:23.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.023 Controller IO queue size 128, less than required. 00:21:23.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.023 Controller IO queue size 128, less than required. 00:21:23.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:23.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:23.023 Initialization complete. Launching workers. 
00:21:23.023 ======================================================== 00:21:23.023 Latency(us) 00:21:23.023 Device Information : IOPS MiB/s Average min max 00:21:23.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 971.14 242.79 137679.73 72347.56 194234.95 00:21:23.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 588.28 147.07 223044.85 61444.70 324991.40 00:21:23.023 ======================================================== 00:21:23.023 Total : 1559.43 389.86 169883.18 61444.70 324991.40 00:21:23.023 00:21:23.023 00:03:23 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:23.023 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.023 No valid NVMe controllers or AIO or URING devices found 00:21:23.023 Initializing NVMe Controllers 00:21:23.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.023 Controller IO queue size 128, less than required. 00:21:23.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.023 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:23.023 Controller IO queue size 128, less than required. 00:21:23.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.023 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:23.023 WARNING: Some requested NVMe devices were skipped 00:21:23.288 00:03:23 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:23.288 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.824 Initializing NVMe Controllers 00:21:25.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:25.824 Controller IO queue size 128, less than required. 00:21:25.824 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:25.824 Controller IO queue size 128, less than required. 00:21:25.824 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:25.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:25.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:25.824 Initialization complete. Launching workers. 
00:21:25.824 00:21:25.824 ==================== 00:21:25.824 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:25.824 TCP transport: 00:21:25.824 polls: 50460 00:21:25.824 idle_polls: 16712 00:21:25.824 sock_completions: 33748 00:21:25.824 nvme_completions: 3739 00:21:25.824 submitted_requests: 5558 00:21:25.825 queued_requests: 1 00:21:25.825 00:21:25.825 ==================== 00:21:25.825 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:25.825 TCP transport: 00:21:25.825 polls: 45514 00:21:25.825 idle_polls: 12809 00:21:25.825 sock_completions: 32705 00:21:25.825 nvme_completions: 3775 00:21:25.825 submitted_requests: 5620 00:21:25.825 queued_requests: 1 00:21:25.825 ======================================================== 00:21:25.825 Latency(us) 00:21:25.825 Device Information : IOPS MiB/s Average min max 00:21:25.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 934.50 233.62 142524.37 74275.29 234333.31 00:21:25.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 943.50 235.87 137732.82 67528.41 215618.90 00:21:25.825 ======================================================== 00:21:25.825 Total : 1877.99 469.50 140117.12 67528.41 234333.31 00:21:25.825 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.825 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.825 rmmod nvme_tcp 00:21:25.825 rmmod nvme_fabrics 00:21:26.085 rmmod nvme_keyring 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3652610 ']' 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3652610 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3652610 ']' 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3652610 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3652610 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:26.085 00:03:26 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3652610' 00:21:26.085 killing process with pid 3652610 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3652610 00:21:26.085 [2024-05-15 00:03:26.493548] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:26.085 00:03:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3652610 00:21:27.989 00:03:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.989 00:03:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:27.989 00:03:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:27.989 00:03:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:27.989 00:03:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.989 00:03:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.989 00:03:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.989 00:03:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.525 00:03:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.525 00:21:30.525 real 0m25.877s 00:21:30.525 user 1m7.406s 00:21:30.525 sys 0m8.510s 00:21:30.525 00:03:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:30.525 00:03:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:30.525 ************************************ 00:21:30.525 END TEST nvmf_perf 00:21:30.525 ************************************ 00:21:30.525 00:03:30 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:30.525 00:03:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:30.525 00:03:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:30.525 00:03:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.525 ************************************ 00:21:30.525 START TEST nvmf_fio_host 00:21:30.525 ************************************ 00:21:30.525 00:03:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:30.525 * Looking for test storage... 
00:21:30.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:30.525 00:03:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.525 00:03:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.525 00:03:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.525 00:03:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.525 00:03:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.525 00:03:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.525 00:03:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.525 00:03:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.526 00:03:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:37.115 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:37.115 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:37.115 Found net devices under 0000:af:00.0: cvl_0_0 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:37.115 Found net devices under 0000:af:00.1: cvl_0_1 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.115 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:37.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:21:37.116 00:21:37.116 --- 10.0.0.2 ping statistics --- 00:21:37.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.116 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:37.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:21:37.116 00:21:37.116 --- 10.0.0.1 ping statistics --- 00:21:37.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.116 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:37.116 00:03:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=3659041 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 3659041 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3659041 ']' 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:37.382 00:03:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.382 [2024-05-15 00:03:37.761453] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:21:37.383 [2024-05-15 00:03:37.761504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.383 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.383 [2024-05-15 00:03:37.836982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:37.383 [2024-05-15 00:03:37.911619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:37.383 [2024-05-15 00:03:37.911656] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.383 [2024-05-15 00:03:37.911671] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.383 [2024-05-15 00:03:37.911683] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.383 [2024-05-15 00:03:37.911693] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:37.383 [2024-05-15 00:03:37.911741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.383 [2024-05-15 00:03:37.911762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.383 [2024-05-15 00:03:37.911849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.383 [2024-05-15 00:03:37.911853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 [2024-05-15 00:03:38.582860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 Malloc1 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:21:38.320 [2024-05-15 00:03:38.681294] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:38.320 [2024-05-15 00:03:38.681563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:38.320 
00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:38.320 00:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:38.579 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:38.579 fio-3.35 00:21:38.579 Starting 1 thread 00:21:38.579 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.136 00:21:41.136 test: (groupid=0, jobs=1): err= 0: pid=3659460: Wed May 15 00:03:41 2024 00:21:41.136 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(91.2MiB/2006msec) 00:21:41.136 slat (nsec): min=1506, max=236604, avg=1716.28, stdev=2139.95 00:21:41.136 clat (usec): min=3607, max=16171, avg=6333.93, stdev=1507.52 00:21:41.136 lat (usec): min=3609, max=16172, avg=6335.65, stdev=1507.59 00:21:41.136 clat percentiles (usec): 00:21:41.136 | 1.00th=[ 4228], 5.00th=[ 4817], 10.00th=[ 5145], 20.00th=[ 5473], 00:21:41.136 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5997], 60.00th=[ 6128], 00:21:41.136 | 70.00th=[ 6390], 80.00th=[ 6783], 90.00th=[ 8029], 95.00th=[ 9503], 00:21:41.136 | 99.00th=[12780], 99.50th=[13566], 99.90th=[14746], 99.95th=[15270], 00:21:41.136 | 99.99th=[16188] 00:21:41.136 bw ( KiB/s): min=44784, max=47856, per=99.94%, avg=46552.00, stdev=1302.65, samples=4 00:21:41.136 iops : min=11196, max=11964, avg=11638.00, stdev=325.66, samples=4 00:21:41.136 write: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(90.6MiB/2006msec); 0 zone resets 00:21:41.136 slat (nsec): min=1550, max=220951, avg=1797.16, stdev=1603.03 00:21:41.136 clat (usec): min=2076, max=10876, avg=4624.28, stdev=823.29 00:21:41.136 lat (usec): min=2078, max=10877, avg=4626.08, stdev=823.44 00:21:41.136 clat percentiles (usec): 00:21:41.136 | 1.00th=[ 2933], 5.00th=[ 3359], 10.00th=[ 3687], 20.00th=[ 4080], 00:21:41.136 | 30.00th=[ 4293], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4752], 00:21:41.136 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5342], 95.00th=[ 5932], 00:21:41.136 | 99.00th=[ 7767], 99.50th=[ 8225], 99.90th=[ 9503], 99.95th=[ 9896], 00:21:41.136 | 99.99th=[10814] 00:21:41.136 bw ( KiB/s): min=45448, max=46752, per=100.00%, avg=46300.00, stdev=611.45, samples=4 00:21:41.136 iops : min=11362, max=11688, avg=11575.00, stdev=152.86, samples=4 00:21:41.136 lat (msec) : 4=8.99%, 10=88.96%, 20=2.05% 00:21:41.136 cpu : usr=64.89%, sys=29.18%, ctx=38, majf=0, minf=4 00:21:41.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:41.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:41.136 issued rwts: total=23360,23193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:41.136 00:21:41.136 Run status group 0 (all jobs): 00:21:41.136 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=91.2MiB (95.7MB), run=2006-2006msec 00:21:41.136 WRITE: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=90.6MiB (95.0MB), run=2006-2006msec 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:41.136 00:03:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:41.393 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:41.393 fio-3.35 00:21:41.393 Starting 1 thread 00:21:41.393 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.915 00:21:43.915 test: (groupid=0, jobs=1): err= 0: pid=3660118: Wed May 15 00:03:44 2024 00:21:43.915 read: IOPS=9825, BW=154MiB/s (161MB/s)(308MiB/2007msec) 00:21:43.915 slat (nsec): min=2405, max=80685, avg=2681.65, stdev=1255.69 00:21:43.915 clat (usec): min=2952, max=50898, avg=7872.93, stdev=3729.23 00:21:43.915 lat (usec): min=2955, max=50901, avg=7875.61, 
stdev=3729.50 00:21:43.915 clat percentiles (usec): 00:21:43.915 | 1.00th=[ 3720], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 5735], 00:21:43.915 | 30.00th=[ 6325], 40.00th=[ 6915], 50.00th=[ 7373], 60.00th=[ 7898], 00:21:43.915 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[10290], 95.00th=[12256], 00:21:43.915 | 99.00th=[24249], 99.50th=[26870], 99.90th=[50070], 99.95th=[50070], 00:21:43.915 | 99.99th=[50594] 00:21:43.915 bw ( KiB/s): min=68352, max=90080, per=50.65%, avg=79632.00, stdev=9891.42, samples=4 00:21:43.915 iops : min= 4272, max= 5630, avg=4977.00, stdev=618.21, samples=4 00:21:43.915 write: IOPS=5794, BW=90.5MiB/s (94.9MB/s)(162MiB/1792msec); 0 zone resets 00:21:43.915 slat (usec): min=27, max=385, avg=30.09, stdev= 7.65 00:21:43.915 clat (usec): min=3063, max=53148, avg=8951.45, stdev=4209.07 00:21:43.915 lat (usec): min=3092, max=53177, avg=8981.53, stdev=4211.29 00:21:43.915 clat percentiles (usec): 00:21:43.915 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7111], 00:21:43.915 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8160], 60.00th=[ 8586], 00:21:43.915 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[11863], 00:21:43.915 | 99.00th=[26608], 99.50th=[50070], 99.90th=[52691], 99.95th=[52691], 00:21:43.915 | 99.99th=[53216] 00:21:43.915 bw ( KiB/s): min=72032, max=93184, per=89.07%, avg=82576.00, stdev=9246.27, samples=4 00:21:43.915 iops : min= 4502, max= 5824, avg=5161.00, stdev=577.89, samples=4 00:21:43.915 lat (msec) : 4=1.32%, 10=85.94%, 20=10.75%, 50=1.74%, 100=0.25% 00:21:43.915 cpu : usr=79.76%, sys=16.85%, ctx=24, majf=0, minf=1 00:21:43.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:21:43.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.915 issued rwts: total=19720,10383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.915 00:21:43.915 Run status group 0 (all jobs): 00:21:43.915 READ: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=308MiB (323MB), run=2007-2007msec 00:21:43.915 WRITE: bw=90.5MiB/s (94.9MB/s), 90.5MiB/s-90.5MiB/s (94.9MB/s-94.9MB/s), io=162MiB (170MB), run=1792-1792msec 00:21:43.915 00:03:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:43.915 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:43.916 rmmod nvme_tcp 00:21:43.916 rmmod nvme_fabrics 00:21:43.916 rmmod nvme_keyring 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3659041 ']' 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3659041 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3659041 ']' 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3659041 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3659041 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3659041' 00:21:43.916 killing process with pid 3659041 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3659041 00:21:43.916 [2024-05-15 00:03:44.291973] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:43.916 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3659041 00:21:44.173 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.173 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.173 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.173 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.173 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.173 00:03:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.173 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.173 00:03:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.071 00:03:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.071 00:21:46.071 real 0m15.896s 00:21:46.071 user 0m46.856s 00:21:46.071 sys 0m7.547s 00:21:46.071 00:03:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:46.071 00:03:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.071 ************************************ 00:21:46.071 END TEST nvmf_fio_host 00:21:46.071 ************************************ 00:21:46.071 00:03:46 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:46.071 00:03:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:46.071 00:03:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:21:46.071 00:03:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.328 ************************************ 00:21:46.328 START TEST nvmf_failover 00:21:46.328 ************************************ 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:46.328 * Looking for test storage... 00:21:46.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:46.328 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.329 00:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:52.880 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:52.880 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:52.880 Found net devices under 0000:af:00.0: cvl_0_0 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:52.880 Found net devices under 0000:af:00.1: cvl_0_1 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.880 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:53.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:53.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:21:53.137 00:21:53.137 --- 10.0.0.2 ping statistics --- 00:21:53.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.137 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:53.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:21:53.137 00:21:53.137 --- 10.0.0.1 ping statistics --- 00:21:53.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.137 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3664079 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3664079 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3664079 ']' 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:53.137 00:03:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.394 [2024-05-15 00:03:53.743040] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
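The test-bed plumbing behind the ping checks above comes down to a short iproute2/iptables sequence: the target-side port of the e810 pair (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1. A condensed sketch of those steps, using only the interface names, namespace name and addresses reported in this run:

  # Target NIC port goes into its own namespace; initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Accept inbound TCP to port 4420 on the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Reachability checks in both directions, as logged above.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is why the EAL and reactor messages that follow come from the namespaced process while the fio and bdevperf initiators run from the root namespace.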
00:21:53.394 [2024-05-15 00:03:53.743087] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.394 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.394 [2024-05-15 00:03:53.814782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:53.394 [2024-05-15 00:03:53.882495] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.394 [2024-05-15 00:03:53.882535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.394 [2024-05-15 00:03:53.882545] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.394 [2024-05-15 00:03:53.882554] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.394 [2024-05-15 00:03:53.882561] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.394 [2024-05-15 00:03:53.882675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.394 [2024-05-15 00:03:53.882746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.394 [2024-05-15 00:03:53.882748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.323 00:03:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:54.323 00:03:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:54.323 00:03:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.323 00:03:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.323 00:03:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:54.323 00:03:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.323 00:03:54 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:54.323 [2024-05-15 00:03:54.747686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.323 00:03:54 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:54.580 Malloc0 00:21:54.580 00:03:54 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:54.580 00:03:55 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:54.837 00:03:55 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.094 [2024-05-15 00:03:55.465346] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:55.094 [2024-05-15 00:03:55.465621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.094 00:03:55 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:55.094 [2024-05-15 00:03:55.642022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:55.094 00:03:55 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:55.350 [2024-05-15 00:03:55.834692] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:55.351 00:03:55 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3664540 00:21:55.351 00:03:55 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:55.351 00:03:55 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.351 00:03:55 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3664540 /var/tmp/bdevperf.sock 00:21:55.351 00:03:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3664540 ']' 00:21:55.351 00:03:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.351 00:03:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:55.351 00:03:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
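To keep the bdevperf output that follows readable, here is the shape of the failover test at this point, condensed from the rpc.py/bdevperf.py calls that appear in this run (script paths shortened; the backgrounding of perform_tests is inferred from the run_test_pid the harness records): the target exposes nqn.2016-06.io.spdk:cnode1, backed by Malloc0, on three TCP listeners (4420, 4421, 4422), and bdevperf was started with -z so it idles until driven over /var/tmp/bdevperf.sock.

  # Target side: one subsystem, one malloc namespace, three TCP listeners.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

  # Initiator side: attach the controller through two paths, start the verify
  # workload in the background, then drop the active listener to force failover.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Later in the run the same pattern repeats: a third path is attached on port 4422 and the 4421 listener is removed, so the verify workload keeps running as long as at least one listener remains. The bursts of "recv state of tqpair ... is same with the state(5) to be set" messages below coincide with these listener removals.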
00:21:55.351 00:03:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:55.351 00:03:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:56.280 00:03:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:56.280 00:03:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:56.280 00:03:56 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:56.537 NVMe0n1 00:21:56.537 00:03:56 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:56.794 00:21:56.794 00:03:57 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3664698 00:21:56.794 00:03:57 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.794 00:03:57 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:57.724 00:03:58 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.984 [2024-05-15 00:03:58.365487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365671] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the 
state(5) to be set 00:21:57.984 [2024-05-15 00:03:58.365860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set
[... previous message repeated ...]
00:21:57.985 [2024-05-15 00:03:58.366285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47f00 is same with the state(5) to be set
00:21:57.985 00:03:58 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:22:01.256 00:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:01.256
00:22:01.256 00:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
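(For readability: the failover sequence driving the log records above and below reduces to the rpc.py calls sketched here. The target address 10.0.0.2, ports 4420/4421/4422, the NQN, the bdevperf RPC socket and the SPDK checkout path are copied from the trace itself; the variable names and comments are illustrative and are not part of host/failover.sh.)

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NQN=nqn.2016-06.io.spdk:cnode1
  # failover.sh@47: give bdevperf a second controller path via the listener on port 4422
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
  # failover.sh@48: remove the 4421 listener so the host fails over to the 4422 path
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  # failover.sh@53 and @57 (logged further below): restore 4420, then remove 4422 to fail back
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422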
00:22:01.513 [2024-05-15 00:04:01.850561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e48ac0 is same with the state(5) to be set
[... previous message repeated ...]
00:22:01.514 [2024-05-15 00:04:01.851278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e48ac0 is same with the state(5) to be set
00:22:01.514 00:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:22:04.786 00:04:04 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:04.786 [2024-05-15 00:04:05.046318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:04.786 00:04:05 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:22:05.718 00:04:06 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:05.718 [2024-05-15 00:04:06.237836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9f740 is same with the state(5) to be set
[... previous message repeated ...]
00:22:05.719 [2024-05-15 00:04:06.238351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9f740 is same with the state(5) to be set
00:22:05.719 00:04:06 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3664698
00:22:12.274 0
00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3664540
00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3664540 ']'
00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3664540
00:22:12.274 00:04:12
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3664540 00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3664540' 00:22:12.274 killing process with pid 3664540 00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3664540 00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3664540 00:22:12.274 00:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:12.274 [2024-05-15 00:03:55.912785] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:22:12.274 [2024-05-15 00:03:55.912841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664540 ] 00:22:12.274 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.274 [2024-05-15 00:03:55.983459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.274 [2024-05-15 00:03:56.053231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.274 Running I/O for 15 seconds... 00:22:12.274 [2024-05-15 00:03:58.366597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.274 [2024-05-15 00:03:58.366637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.366973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.366987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97352 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:12.275 [2024-05-15 00:03:58.367660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.275 [2024-05-15 00:03:58.367820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.275 [2024-05-15 00:03:58.367833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.367848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.367861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.367876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.367888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.367904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.367917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.367932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.367945] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.367959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.367973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.367988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.276 [2024-05-15 00:03:58.368886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.276 [2024-05-15 00:03:58.368915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.276 [2024-05-15 00:03:58.368944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.276 [2024-05-15 00:03:58.368976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.368990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.276 [2024-05-15 00:03:58.369004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.276 [2024-05-15 00:03:58.369018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.276 [2024-05-15 00:03:58.369031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.277 [2024-05-15 00:03:58.369060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 
[2024-05-15 00:03:58.369131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.277 [2024-05-15 00:03:58.369322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.277 [2024-05-15 00:03:58.369349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.277 [2024-05-15 00:03:58.369376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.277 [2024-05-15 00:03:58.369405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.277 [2024-05-15 00:03:58.369434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.277 [2024-05-15 00:03:58.369461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.277 [2024-05-15 00:03:58.369489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.277 [2024-05-15 00:03:58.369519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.277 [2024-05-15 00:03:58.369548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:104 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.369979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97960 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.369994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.370010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.370023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.370038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.370051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.370066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.370078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.277 [2024-05-15 00:03:58.370093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.277 [2024-05-15 00:03:58.370105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:03:58.370134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:03:58.370161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:03:58.370189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:03:58.370225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:03:58.370251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:12.278 [2024-05-15 00:03:58.370279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4c7d0 is same with the state(5) to be set 00:22:12.278 [2024-05-15 00:03:58.370308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.278 [2024-05-15 00:03:58.370319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.278 [2024-05-15 00:03:58.370331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98048 len:8 PRP1 0x0 PRP2 0x0 00:22:12.278 [2024-05-15 00:03:58.370344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370397] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf4c7d0 was disconnected and freed. reset controller. 00:22:12.278 [2024-05-15 00:03:58.370417] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:12.278 [2024-05-15 00:03:58.370450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.278 [2024-05-15 00:03:58.370464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.278 [2024-05-15 00:03:58.370492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.278 [2024-05-15 00:03:58.370520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.278 [2024-05-15 00:03:58.370547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:03:58.370559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.278 [2024-05-15 00:03:58.370589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d590 (9): Bad file descriptor 00:22:12.278 [2024-05-15 00:03:58.373744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.278 [2024-05-15 00:03:58.536189] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
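The burst above (and the two that follow it) repeats one pattern: every queued READ/WRITE on the I/O qpair is printed by nvme_io_qpair_print_command, completed with ABORTED - SQ DELETION, the qpair is disconnected and freed, bdev_nvme fails the trid over to the next listener port, and the controller is reset. A minimal, hypothetical helper (not part of the SPDK repository or of this test suite) that condenses such a console dump into totals; it relies only on the message formats visible in the log above, and works even when several records are wrapped onto one physical line:

    #!/usr/bin/env python3
    # Hypothetical log-skimming helper: count the aborted I/O and list the
    # trid failovers printed during an SPDK bdev_nvme failover run.
    import re
    import sys
    from collections import Counter

    # "nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97760 len:8 ..."
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )
    # "ABORTED - SQ DELETION (00/08) qid:1 cid:0 ..."
    ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\)")
    # "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421"
    FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

    def summarize(text):
        ops = Counter(m.group(1) for m in CMD_RE.finditer(text))
        aborts = len(ABORT_RE.findall(text))
        failovers = FAILOVER_RE.findall(text)
        return ops, aborts, failovers

    if __name__ == "__main__":
        ops, aborts, failovers = summarize(sys.stdin.read())
        print(f"commands printed:    {dict(ops)}")
        print(f"aborted completions: {aborts}")
        for src, dst in failovers:
            print(f"failover: {src} -> {dst}")

Fed this console log on stdin, it would report the READ/WRITE totals, the number of ABORTED - SQ DELETION completions, and the failover transitions (10.0.0.2:4420 -> 4421, then 4421 -> 4422 below).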
00:22:12.278 [2024-05-15 00:04:01.851551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851876] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.278 [2024-05-15 00:04:01.851962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.278 [2024-05-15 00:04:01.851976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:89 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.279 [2024-05-15 00:04:01.852480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.279 [2024-05-15 00:04:01.852495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.280 [2024-05-15 00:04:01.852509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.280 [2024-05-15 00:04:01.852524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.280 [2024-05-15 00:04:01.852537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.280 [2024-05-15 00:04:01.852552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.280 [2024-05-15 00:04:01.852565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.280 [2024-05-15 00:04:01.852581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.280 [2024-05-15 00:04:01.852594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.280 [2024-05-15 00:04:01.852609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.280 [2024-05-15 00:04:01.852622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.280 [2024-05-15 00:04:01.852638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.280 [2024-05-15 00:04:01.852653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.280 [2024-05-15 00:04:01.852670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.281 [2024-05-15 00:04:01.852686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.852703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.281 [2024-05-15 00:04:01.852718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.852738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.852753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.852771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84784 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.852785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.852802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.852818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.852835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.852850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.852867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.852883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.852900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.852915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.852932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.852947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.852963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.852977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.852994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 
[2024-05-15 00:04:01.853106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.281 [2024-05-15 00:04:01.853268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.281 [2024-05-15 00:04:01.853300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.281 [2024-05-15 00:04:01.853331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.281 [2024-05-15 00:04:01.853364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.281 [2024-05-15 00:04:01.853396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.281 [2024-05-15 00:04:01.853428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.281 [2024-05-15 00:04:01.853460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.281 [2024-05-15 00:04:01.853492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.281 [2024-05-15 00:04:01.853870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.281 [2024-05-15 00:04:01.853887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.853902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.853919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.853935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.853953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.853969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.853986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 
00:04:01.854413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.282 [2024-05-15 00:04:01.854525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.282 [2024-05-15 00:04:01.854558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.282 [2024-05-15 00:04:01.854590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.282 [2024-05-15 00:04:01.854623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.282 [2024-05-15 00:04:01.854655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.282 [2024-05-15 00:04:01.854686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.282 [2024-05-15 00:04:01.854716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.282 [2024-05-15 00:04:01.854959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.282 [2024-05-15 00:04:01.854972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.854987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85312 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.283 [2024-05-15 00:04:01.855464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.283 [2024-05-15 00:04:01.855508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.283 [2024-05-15 00:04:01.855522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84768 len:8 PRP1 0x0 PRP2 0x0 00:22:12.283 [2024-05-15 00:04:01.855538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.283 [2024-05-15 00:04:01.855590] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10f6f40 was disconnected and freed. reset controller. 
00:22:12.283 [2024-05-15 00:04:01.855606] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:12.283 [2024-05-15 00:04:01.855636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.283 [2024-05-15 00:04:01.855650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.283 [2024-05-15 00:04:01.855664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.283 [2024-05-15 00:04:01.855677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.283 [2024-05-15 00:04:01.855691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.283 [2024-05-15 00:04:01.855704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.283 [2024-05-15 00:04:01.855719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.283 [2024-05-15 00:04:01.855732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.283 [2024-05-15 00:04:01.855745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:12.283 [2024-05-15 00:04:01.855777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d590 (9): Bad file descriptor
00:22:12.283 [2024-05-15 00:04:01.858896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:12.283 [2024-05-15 00:04:02.013937] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
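The failover hops buried in output like the above are easy to pull back out of a captured bdevperf log with a single grep; a minimal sketch (the file name bdevperf.log is illustrative, the pattern is taken from the bdev_nvme_failover_trid notices):

  # Print each path switch reported by bdev_nvme, in the order it happened.
  grep -o 'Start failover from [0-9.:]* to [0-9.:]*' bdevperf.log
  # For the 15-second run in this log that would list three hops, one per detached path
  # (ending with 4421 -> 4422 and 4422 -> 4420 as shown above and below).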
00:22:12.283 [2024-05-15 00:04:06.238081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.283 [2024-05-15 00:04:06.238121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1, cid:2 and cid:3 ...]
00:22:12.283 [2024-05-15 00:04:06.238265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d590 is same with the state(5) to be set
00:22:12.283 [2024-05-15 00:04:06.238725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.283 [2024-05-15 00:04:06.238762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / ABORTED - SQ DELETION pair repeats for every other I/O still queued on the qpair (elapsed stamps 00:22:12.283 through 00:22:12.287): READs covering lba 22728 through 23216 and WRITEs covering lba 23232 through 23736, len:8 each ...]
00:22:12.287 [2024-05-15 00:04:06.242424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f6d30 is same with the state(5) to be set
00:22:12.287 [2024-05-15 00:04:06.242439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:12.287 [2024-05-15 00:04:06.242451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:12.287 [2024-05-15 00:04:06.242462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23224 len:8 PRP1 0x0 PRP2 0x0
00:22:12.287 [2024-05-15 00:04:06.242476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.287 [2024-05-15 00:04:06.242528] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10f6d30 was disconnected and freed. reset controller.
00:22:12.287 [2024-05-15 00:04:06.242545] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:22:12.287 [2024-05-15 00:04:06.242558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:12.287 [2024-05-15 00:04:06.245692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:12.287 [2024-05-15 00:04:06.245734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d590 (9): Bad file descriptor
00:22:12.287 [2024-05-15 00:04:06.285210] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:12.287
00:22:12.287 Latency(us)
00:22:12.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:12.287 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:12.287 Verification LBA range: start 0x0 length 0x4000
00:22:12.287 NVMe0n1 : 15.05 11564.44 45.17 1152.86 0.00 10023.54 1258.29 45508.20
00:22:12.287 ===================================================================================================================
00:22:12.287 Total : 11564.44 45.17 1152.86 0.00 10023.54 1258.29 45508.20
00:22:12.287 Received shutdown signal, test time was about 15.000000 seconds
00:22:12.287
00:22:12.287 Latency(us)
00:22:12.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:12.287 ===================================================================================================================
00:22:12.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:12.287 00:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:12.287 00:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:12.287 00:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:12.287 00:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3667302
00:22:12.287 00:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:12.287 00:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3667302 /var/tmp/bdevperf.sock
00:22:12.287 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3667302 ']'
00:22:12.287 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:12.287 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
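The trace above condenses to two steps: assert that the first bdevperf run survived all three path failures, then start a second bdevperf in RPC-wait mode so the paths can be rearranged before any I/O is issued. A rough sketch of just those steps, with the helper-function plumbing and error handling stripped out (the try.txt name and the backgrounding/redirection are assumptions, not the script's literal text):

  # The 15-second run must have logged exactly three successful controller resets.
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || exit 1

  # -z makes bdevperf wait on /var/tmp/bdevperf.sock for RPCs instead of running immediately.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &> try.txt &
  bdevperf_pid=$!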
00:22:12.287 00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:13.214 00:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:04:13 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-05-15 00:04:13.674153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:04:13 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:13.471 [2024-05-15 00:04:13.842603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:04:13 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:13.727 NVMe0n1
00:04:14 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:13.983
00:22:13.983 00:04:14 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:14.239
00:22:14.239 00:04:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:04:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:22:14.496 00:04:14 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:14.751 00:04:15 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:22:18.026 00:04:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:04:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:22:18.026 00:04:18 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:18.026 00:04:18 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3668351
00:04:18 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3668351
00:22:18.955 0
00:22:18.955 00:04:19 nvmf_tcp.nvmf_failover
-- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:18.955 [2024-05-15 00:04:12.715248] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:22:18.955 [2024-05-15 00:04:12.715302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667302 ] 00:22:18.955 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.955 [2024-05-15 00:04:12.784550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.955 [2024-05-15 00:04:12.848656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.955 [2024-05-15 00:04:15.073108] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:18.955 [2024-05-15 00:04:15.073156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.955 [2024-05-15 00:04:15.073175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.955 [2024-05-15 00:04:15.073196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.955 [2024-05-15 00:04:15.073211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.955 [2024-05-15 00:04:15.073225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.955 [2024-05-15 00:04:15.073240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.955 [2024-05-15 00:04:15.073255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.955 [2024-05-15 00:04:15.073267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.955 [2024-05-15 00:04:15.073282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:18.955 [2024-05-15 00:04:15.073313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:18.955 [2024-05-15 00:04:15.073336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x791590 (9): Bad file descriptor 00:22:18.955 [2024-05-15 00:04:15.206407] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:18.955 Running I/O for 1 seconds... 
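The try.txt contents above come from a run that was wired up entirely over the bdevperf RPC socket; the sequence traced just before it reduces to the sketch below (paths and flags copied from the trace; the variables and the loop are a restatement of the three separate attach calls, not the script's literal text):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Expose two extra portals on the target, then attach NVMe0 through all three.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done

  # Drop the active path, give bdev_nvme time to fail over, then kick off the deferred I/O.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  sleep 3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests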
00:22:18.955 00:22:18.955 Latency(us) 00:22:18.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.955 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:18.955 Verification LBA range: start 0x0 length 0x4000 00:22:18.955 NVMe0n1 : 1.00 11305.78 44.16 0.00 0.00 11270.04 2031.62 26528.97 00:22:18.955 =================================================================================================================== 00:22:18.955 Total : 11305.78 44.16 0.00 0.00 11270.04 2031.62 26528.97 00:22:18.955 00:04:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:18.955 00:04:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:19.212 00:04:19 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.212 00:04:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.212 00:04:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:19.470 00:04:19 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.728 00:04:20 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3667302 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3667302 ']' 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3667302 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3667302 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3667302' 00:22:23.011 killing process with pid 3667302 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3667302 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3667302 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:23.011 00:04:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:23.269 
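For reference, the failover exercise logged above reduces to a short RPC sequence: the target exposes nqn.2016-06.io.spdk:cnode1 on three TCP listeners, bdevperf attaches all three paths under the single controller name NVMe0, detaching the active 4420 path triggers the switch to 4421 reported by the bdev_nvme notices, and perform_tests then drives I/O through the surviving paths. A condensed sketch taken from the commands in the log (rpc.py and bdevperf.py paths shortened; the full paths appear above):

rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# attach every path under the same bdev name so bdev_nvme can fail over between them
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# drop the active path, then run I/O over the remaining ones
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests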
00:04:23 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.269 rmmod nvme_tcp 00:22:23.269 rmmod nvme_fabrics 00:22:23.269 rmmod nvme_keyring 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3664079 ']' 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3664079 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3664079 ']' 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3664079 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:23.269 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3664079 00:22:23.528 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:23.528 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:23.528 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3664079' 00:22:23.528 killing process with pid 3664079 00:22:23.528 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3664079 00:22:23.528 [2024-05-15 00:04:23.887466] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:23.528 00:04:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3664079 00:22:23.528 00:04:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.528 00:04:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.528 00:04:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.528 00:04:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.528 00:04:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.528 00:04:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.528 00:04:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.528 00:04:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.061 00:04:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.061 00:22:26.061 real 0m39.504s 00:22:26.061 user 
2m1.529s 00:22:26.061 sys 0m9.936s 00:22:26.061 00:04:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:26.061 00:04:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:26.061 ************************************ 00:22:26.061 END TEST nvmf_failover 00:22:26.061 ************************************ 00:22:26.061 00:04:26 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:26.061 00:04:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:26.061 00:04:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:26.061 00:04:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:26.061 ************************************ 00:22:26.061 START TEST nvmf_host_discovery 00:22:26.061 ************************************ 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:26.061 * Looking for test storage... 00:22:26.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.061 00:04:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:32.623 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:32.623 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:32.624 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:32.624 Found net devices under 0000:af:00.0: cvl_0_0 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:32.624 Found net devices under 0000:af:00.1: cvl_0_1 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.624 00:04:32 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.624 00:04:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.624 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.624 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.624 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:32.624 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.624 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.624 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.624 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:32.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:22:32.624 00:22:32.624 --- 10.0.0.2 ping statistics --- 00:22:32.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.624 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:22:32.624 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:22:32.883 00:22:32.883 --- 10.0.0.1 ping statistics --- 00:22:32.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.883 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3672877 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3672877 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3672877 ']' 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:32.883 00:04:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.883 [2024-05-15 00:04:33.309907] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:22:32.883 [2024-05-15 00:04:33.309954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.883 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.883 [2024-05-15 00:04:33.383978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.883 [2024-05-15 00:04:33.457984] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
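The block above is the per-test network bring-up for a phy TCP run: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and becomes the target-side interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with a single ping before nvmf_tgt is started inside the namespace. Condensed from the commands in the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator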
00:22:32.883 [2024-05-15 00:04:33.458019] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.883 [2024-05-15 00:04:33.458029] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.883 [2024-05-15 00:04:33.458037] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.883 [2024-05-15 00:04:33.458044] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.883 [2024-05-15 00:04:33.458065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.816 [2024-05-15 00:04:34.153893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.816 [2024-05-15 00:04:34.165879] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:33.816 [2024-05-15 00:04:34.166071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.816 null0 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.816 null1 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3673157 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3673157 /tmp/host.sock 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3673157 ']' 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:33.816 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:33.816 00:04:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.816 [2024-05-15 00:04:34.244762] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:22:33.816 [2024-05-15 00:04:34.244808] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673157 ] 00:22:33.816 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.816 [2024-05-15 00:04:34.313813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.816 [2024-05-15 00:04:34.389069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.751 00:04:35 
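The host-discovery test runs two SPDK applications: nvmf_tgt inside cvl_0_0_ns_spdk as the target (core mask 0x2, pid 3672877) and a second nvmf_tgt on /tmp/host.sock acting as the discovering host (core mask 0x1, pid 3673157). The wiring above, condensed, with rpc_cmd standing in for scripts/rpc.py as in the log:

# target side (the app running inside the namespace)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
# host side (the app listening on /tmp/host.sock) starts following the discovery service
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test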
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:34.751 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.752 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.011 [2024-05-15 00:04:35.397262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:35.011 
00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:22:35.011 00:04:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:22:35.578 [2024-05-15 00:04:36.082456] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:35.578 [2024-05-15 00:04:36.082479] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:35.578 [2024-05-15 00:04:36.082493] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:35.578 [2024-05-15 00:04:36.168736] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:35.836 [2024-05-15 00:04:36.346149] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:22:35.836 [2024-05-15 00:04:36.346167] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.094 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.390 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:36.390 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:36.390 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:36.391 00:04:36 
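The rpc_cmd/jq fragments before and after this point all follow one verification pattern: waitforcondition (from autotest_common.sh) re-evaluates a condition up to ten times, and the condition compares an RPC listing on the host socket against what the discovery service should have produced. A rough sketch of the helpers as they are used here, with the function bodies reconstructed from the traced pipelines:

get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
# once the 4420 listener is discovered, the attached controller and its namespace bdev must show up
waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'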
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.391 [2024-05-15 00:04:36.913518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:36.391 [2024-05-15 00:04:36.913846] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:36.391 [2024-05-15 00:04:36.913868] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.391 00:04:36 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.391 00:04:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.672 00:04:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.672 [2024-05-15 00:04:37.002112] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.672 00:04:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:36.672 00:04:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:22:36.672 [2024-05-15 00:04:37.147029] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:36.672 [2024-05-15 00:04:37.147047] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:36.672 [2024-05-15 00:04:37.147054] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.608 [2024-05-15 00:04:38.173854] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:37.608 [2024-05-15 00:04:38.173876] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:37.608 [2024-05-15 00:04:38.178205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.608 [2024-05-15 00:04:38.178225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.608 [2024-05-15 00:04:38.178242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.608 [2024-05-15 00:04:38.178256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.608 [2024-05-15 00:04:38.178270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.608 [2024-05-15 00:04:38.178284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.608 [2024-05-15 00:04:38.178298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.608 [2024-05-15 00:04:38.178313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.608 [2024-05-15 00:04:38.178328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
get_subsystem_names 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.608 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.608 [2024-05-15 00:04:38.188205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.608 [2024-05-15 00:04:38.198244] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.608 [2024-05-15 00:04:38.198645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.608 [2024-05-15 00:04:38.199018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.608 [2024-05-15 00:04:38.199034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.608 [2024-05-15 00:04:38.199050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.608 [2024-05-15 00:04:38.199070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.608 [2024-05-15 00:04:38.199104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.608 [2024-05-15 00:04:38.199118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.608 [2024-05-15 00:04:38.199134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.608 [2024-05-15 00:04:38.199152] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:37.869 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.869 [2024-05-15 00:04:38.208306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.869 [2024-05-15 00:04:38.208689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.869 [2024-05-15 00:04:38.209050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.869 [2024-05-15 00:04:38.209066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.869 [2024-05-15 00:04:38.209080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.869 [2024-05-15 00:04:38.209101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.869 [2024-05-15 00:04:38.209118] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.869 [2024-05-15 00:04:38.209130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.869 [2024-05-15 00:04:38.209144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.869 [2024-05-15 00:04:38.209170] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:37.869 [2024-05-15 00:04:38.218366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.869 [2024-05-15 00:04:38.218612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.869 [2024-05-15 00:04:38.218967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.869 [2024-05-15 00:04:38.218982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.869 [2024-05-15 00:04:38.218996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.869 [2024-05-15 00:04:38.219014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.869 [2024-05-15 00:04:38.219031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.869 [2024-05-15 00:04:38.219043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.869 [2024-05-15 00:04:38.219057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.869 [2024-05-15 00:04:38.219073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:37.869 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.869 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:37.869 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:37.869 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:37.869 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:37.869 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:37.869 [2024-05-15 00:04:38.228427] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.869 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:37.869 [2024-05-15 00:04:38.228788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.869 [2024-05-15 00:04:38.229161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.869 [2024-05-15 00:04:38.229175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.869 [2024-05-15 00:04:38.229189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.869 [2024-05-15 00:04:38.229215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.869 [2024-05-15 00:04:38.229242] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.869 [2024-05-15 00:04:38.229256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.869 [2024-05-15 00:04:38.229270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.870 [2024-05-15 00:04:38.229287] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.870 [2024-05-15 00:04:38.238491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.870 [2024-05-15 00:04:38.238886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.239252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.239268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.870 [2024-05-15 00:04:38.239282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.870 [2024-05-15 00:04:38.239301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.870 [2024-05-15 00:04:38.239329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.870 [2024-05-15 00:04:38.239344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.870 [2024-05-15 00:04:38.239357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.870 [2024-05-15 00:04:38.239374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:37.870 [2024-05-15 00:04:38.248550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.870 [2024-05-15 00:04:38.248993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.249301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.249316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.870 [2024-05-15 00:04:38.249331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.870 [2024-05-15 00:04:38.249349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.870 [2024-05-15 00:04:38.249376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.870 [2024-05-15 00:04:38.249393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.870 [2024-05-15 00:04:38.249406] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:22:37.870 [2024-05-15 00:04:38.249422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:37.870 [2024-05-15 00:04:38.258605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.870 [2024-05-15 00:04:38.258909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.259233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.259247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.870 [2024-05-15 00:04:38.259261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.870 [2024-05-15 00:04:38.259278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.870 [2024-05-15 00:04:38.259296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.870 [2024-05-15 00:04:38.259308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.870 [2024-05-15 00:04:38.259321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.870 [2024-05-15 00:04:38.259338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:37.870 [2024-05-15 00:04:38.268662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.870 [2024-05-15 00:04:38.269034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.269397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.269414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.870 [2024-05-15 00:04:38.269428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.870 [2024-05-15 00:04:38.269447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.870 [2024-05-15 00:04:38.269474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.870 [2024-05-15 00:04:38.269487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.870 [2024-05-15 00:04:38.269500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.870 [2024-05-15 00:04:38.269516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.870 [2024-05-15 00:04:38.278720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.870 [2024-05-15 00:04:38.279102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.279474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.279489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.870 [2024-05-15 00:04:38.279504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.870 [2024-05-15 00:04:38.279521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.870 [2024-05-15 00:04:38.279549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.870 [2024-05-15 00:04:38.279568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.870 [2024-05-15 00:04:38.279581] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.870 [2024-05-15 00:04:38.279597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:37.870 [2024-05-15 00:04:38.288775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.870 [2024-05-15 00:04:38.289200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.289574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.289589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.870 [2024-05-15 00:04:38.289603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.870 [2024-05-15 00:04:38.289621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.870 [2024-05-15 00:04:38.289648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.870 [2024-05-15 00:04:38.289662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.870 [2024-05-15 00:04:38.289674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.870 [2024-05-15 00:04:38.289690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:37.870 [2024-05-15 00:04:38.298832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.870 [2024-05-15 00:04:38.299296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.299615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.870 [2024-05-15 00:04:38.299629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8a130 with addr=10.0.0.2, port=4420 00:22:37.870 [2024-05-15 00:04:38.299643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a130 is same with the state(5) to be set 00:22:37.870 [2024-05-15 00:04:38.299662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8a130 (9): Bad file descriptor 00:22:37.870 [2024-05-15 00:04:38.299690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.870 [2024-05-15 00:04:38.299707] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.870 [2024-05-15 00:04:38.299720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.870 [2024-05-15 00:04:38.299736] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.870 [2024-05-15 00:04:38.301686] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:37.870 [2024-05-15 00:04:38.301702] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:22:37.870 00:04:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.808 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:39.067 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:39.068 00:04:39 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.068 00:04:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.446 [2024-05-15 00:04:40.613949] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:40.446 [2024-05-15 00:04:40.613968] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:40.446 [2024-05-15 00:04:40.613981] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:40.446 [2024-05-15 00:04:40.702239] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:40.446 [2024-05-15 00:04:40.809120] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:40.446 [2024-05-15 00:04:40.809151] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:40.446 request: 00:22:40.446 { 00:22:40.446 "name": "nvme", 00:22:40.446 "trtype": "tcp", 00:22:40.446 "traddr": "10.0.0.2", 00:22:40.446 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:40.446 "adrfam": "ipv4", 00:22:40.446 "trsvcid": "8009", 00:22:40.446 "wait_for_attach": true, 00:22:40.446 "method": "bdev_nvme_start_discovery", 00:22:40.446 "req_id": 1 00:22:40.446 } 00:22:40.446 Got JSON-RPC error response 00:22:40.446 response: 00:22:40.446 { 00:22:40.446 "code": -17, 00:22:40.446 "message": "File exists" 00:22:40.446 } 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.446 request: 00:22:40.446 { 00:22:40.446 "name": "nvme_second", 00:22:40.446 "trtype": "tcp", 00:22:40.446 "traddr": "10.0.0.2", 00:22:40.446 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:40.446 "adrfam": "ipv4", 00:22:40.446 "trsvcid": "8009", 00:22:40.446 "wait_for_attach": true, 00:22:40.446 "method": "bdev_nvme_start_discovery", 00:22:40.446 "req_id": 1 00:22:40.446 } 00:22:40.446 Got JSON-RPC error response 00:22:40.446 response: 00:22:40.446 { 00:22:40.446 "code": -17, 00:22:40.446 "message": "File exists" 00:22:40.446 } 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.446 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:40.447 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:40.447 00:04:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.447 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:40.447 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:40.447 00:04:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.447 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.447 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 00:04:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.447 00:04:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.447 00:04:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.447 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.705 00:04:41 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:40.705 00:04:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:40.705 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:40.705 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:40.705 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:40.705 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.705 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:40.705 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.705 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:40.705 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.706 00:04:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.644 [2024-05-15 00:04:42.061247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.644 [2024-05-15 00:04:42.061721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.644 [2024-05-15 00:04:42.061744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205f980 with addr=10.0.0.2, port=8010 00:22:41.644 [2024-05-15 00:04:42.061763] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:41.644 [2024-05-15 00:04:42.061775] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:41.644 [2024-05-15 00:04:42.061786] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:42.582 [2024-05-15 00:04:43.063711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.583 [2024-05-15 00:04:43.064103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.583 [2024-05-15 00:04:43.064125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea4840 with addr=10.0.0.2, port=8010 00:22:42.583 [2024-05-15 00:04:43.064143] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:42.583 [2024-05-15 00:04:43.064154] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:42.583 [2024-05-15 00:04:43.064165] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:43.519 [2024-05-15 00:04:44.065639] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:43.519 request: 00:22:43.519 { 00:22:43.519 "name": "nvme_second", 00:22:43.519 "trtype": "tcp", 00:22:43.519 "traddr": "10.0.0.2", 00:22:43.519 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:43.519 "adrfam": "ipv4", 00:22:43.519 "trsvcid": "8010", 00:22:43.519 "attach_timeout_ms": 3000, 00:22:43.519 
"method": "bdev_nvme_start_discovery", 00:22:43.519 "req_id": 1 00:22:43.519 } 00:22:43.519 Got JSON-RPC error response 00:22:43.519 response: 00:22:43.519 { 00:22:43.519 "code": -110, 00:22:43.519 "message": "Connection timed out" 00:22:43.519 } 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.519 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3673157 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:43.778 rmmod nvme_tcp 00:22:43.778 rmmod nvme_fabrics 00:22:43.778 rmmod nvme_keyring 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3672877 ']' 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3672877 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3672877 ']' 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3672877 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3672877 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3672877' 00:22:43.778 killing process with pid 3672877 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3672877 00:22:43.778 [2024-05-15 00:04:44.228751] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:43.778 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3672877 00:22:44.037 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:44.037 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:44.037 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:44.037 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.037 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:44.037 00:04:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.037 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.037 00:04:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.941 00:04:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.941 00:22:45.941 real 0m20.207s 00:22:45.941 user 0m24.220s 00:22:45.941 sys 0m7.176s 00:22:45.941 00:04:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:45.941 00:04:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.941 ************************************ 00:22:45.941 END TEST nvmf_host_discovery 00:22:45.941 ************************************ 00:22:45.941 00:04:46 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:45.941 00:04:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:45.941 00:04:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:45.941 00:04:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:46.200 ************************************ 00:22:46.200 START TEST nvmf_host_multipath_status 00:22:46.200 ************************************ 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:46.200 * Looking for test storage... 
00:22:46.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.200 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.201 00:04:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:46.201 00:04:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:52.773 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:52.774 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:52.774 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:52.774 Found net devices under 0000:af:00.0: cvl_0_0 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:52.774 Found net devices under 0000:af:00.1: cvl_0_1 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:52.774 00:04:52 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.774 00:04:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:52.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:22:52.774 00:22:52.774 --- 10.0.0.2 ping statistics --- 00:22:52.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.774 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:22:52.774 00:22:52.774 --- 10.0.0.1 ping statistics --- 00:22:52.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.774 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3678634 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3678634 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3678634 ']' 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:52.774 00:04:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:52.774 [2024-05-15 00:04:53.349786] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:22:52.774 [2024-05-15 00:04:53.349836] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.035 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.035 [2024-05-15 00:04:53.422818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:53.035 [2024-05-15 00:04:53.494299] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.035 [2024-05-15 00:04:53.494339] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.035 [2024-05-15 00:04:53.494353] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.035 [2024-05-15 00:04:53.494363] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.035 [2024-05-15 00:04:53.494373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.035 [2024-05-15 00:04:53.494470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.035 [2024-05-15 00:04:53.494474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.606 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.606 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:22:53.606 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.606 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.606 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:53.606 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.606 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3678634 00:22:53.606 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:53.865 [2024-05-15 00:04:54.331151] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.865 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:54.124 Malloc0 00:22:54.124 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:54.124 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:54.383 00:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.708 [2024-05-15 00:04:55.038917] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:22:54.708 [2024-05-15 00:04:55.039255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.708 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:54.708 [2024-05-15 00:04:55.195570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:54.708 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3678927 00:22:54.708 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:54.708 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.708 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3678927 /var/tmp/bdevperf.sock 00:22:54.708 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3678927 ']' 00:22:54.708 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.708 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:54.708 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:54.708 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:54.709 00:04:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:55.642 00:04:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:55.642 00:04:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:22:55.642 00:04:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:55.900 00:04:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:56.158 Nvme0n1 00:22:56.158 00:04:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:56.416 Nvme0n1 00:22:56.674 00:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:56.674 00:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:58.574 00:04:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:58.574 00:04:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:58.832 00:04:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:58.832 00:04:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.207 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:00.466 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.466 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:00.466 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.466 00:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:00.724 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.724 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:00.724 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.724 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:00.724 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.724 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:00.724 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:00.724 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.982 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.982 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:00.982 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:01.240 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:01.498 00:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:02.432 00:05:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:02.432 00:05:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:02.432 00:05:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.432 00:05:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:02.690 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:02.690 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:02.690 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.690 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:02.690 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.690 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:02.690 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.690 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:02.948 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.948 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:02.948 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.948 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:03.206 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.206 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:03.206 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.206 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:03.206 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.206 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:03.206 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.206 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:03.464 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.464 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:03.464 00:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:03.722 00:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:03.980 00:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:04.912 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:04.912 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:04.912 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.912 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:05.170 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.170 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:05.170 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.170 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:05.170 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.170 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:05.170 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.170 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:05.428 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.428 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:05.428 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:05.428 00:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.686 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.686 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:05.686 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.686 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.686 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.686 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:05.686 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.686 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.946 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.946 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:05.946 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:06.204 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:06.461 00:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:07.395 00:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:07.395 00:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:07.395 00:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.395 00:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.653 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.653 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:07.653 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.653 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.653 00:05:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.653 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.653 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.653 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.911 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.911 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.911 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.911 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:08.169 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.169 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:08.169 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.169 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.169 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.169 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:08.169 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.169 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.427 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.427 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:08.427 00:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:08.687 00:05:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:08.687 00:05:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:10.128 00:05:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.128 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:10.386 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.386 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:10.386 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:10.386 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.386 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.386 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:10.387 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.387 00:05:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:10.644 00:05:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.644 00:05:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:10.644 00:05:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.644 00:05:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:10.902 00:05:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.902 00:05:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:10.902 00:05:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:10.902 00:05:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:11.160 00:05:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:12.096 00:05:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:12.096 00:05:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:12.096 00:05:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.096 00:05:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:12.354 00:05:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:12.354 00:05:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:12.354 00:05:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.354 00:05:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:12.612 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.612 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:12.612 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.612 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:12.612 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.612 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:12.612 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.612 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:12.870 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.870 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:12.870 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.870 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:13.129 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:13.129 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:13.129 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.129 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:13.386 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.386 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:13.386 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:13.386 00:05:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:13.644 00:05:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:13.903 00:05:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:14.839 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:14.839 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:14.839 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.839 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:15.097 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.097 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:15.097 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.097 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:23:15.097 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.097 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:15.097 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.097 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:15.355 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.355 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:15.355 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.355 00:05:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:15.613 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.613 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:15.613 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.613 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:15.613 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.613 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:15.613 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.870 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:15.870 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.870 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:15.871 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:16.127 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:16.384 00:05:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:17.317 00:05:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:23:17.317 00:05:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:17.317 00:05:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.317 00:05:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:17.575 00:05:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:17.575 00:05:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:17.575 00:05:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:17.575 00:05:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.833 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.833 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:17.833 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.833 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:17.833 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.833 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:17.833 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.833 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:18.092 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.092 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:18.092 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.092 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:18.350 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.350 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:18.350 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.350 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:18.350 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.350 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:18.350 00:05:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:18.608 00:05:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:18.866 00:05:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:19.799 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:19.799 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:19.799 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.800 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:20.057 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.057 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:20.057 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.057 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:20.057 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.057 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:20.057 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.057 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:20.315 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.315 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:20.315 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:20.315 00:05:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.572 00:05:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.572 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:20.573 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.573 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.830 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.830 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:20.830 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.830 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.830 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.830 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:20.830 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:21.088 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:21.346 00:05:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:22.279 00:05:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:22.279 00:05:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:22.279 00:05:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.279 00:05:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:22.565 00:05:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.565 00:05:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:22.565 00:05:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.566 00:05:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:22.566 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.566 00:05:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:22.566 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.566 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.824 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.824 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:22.824 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.824 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:23.081 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.081 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:23.081 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.081 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:23.338 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.338 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3678927 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3678927 ']' 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3678927 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3678927 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
3678927' 00:23:23.339 killing process with pid 3678927 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3678927 00:23:23.339 00:05:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3678927 00:23:23.599 Connection closed with partial response: 00:23:23.599 00:23:23.599 00:23:23.599 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3678927 00:23:23.599 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:23.599 [2024-05-15 00:04:55.254966] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:23:23.599 [2024-05-15 00:04:55.255021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678927 ] 00:23:23.599 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.599 [2024-05-15 00:04:55.321468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.599 [2024-05-15 00:04:55.396342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.599 Running I/O for 90 seconds... 00:23:23.599 [2024-05-15 00:05:09.076057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076260] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 
m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.599 [2024-05-15 00:05:09.076726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:23.599 [2024-05-15 00:05:09.076742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.600 [2024-05-15 00:05:09.076751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.076767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.600 [2024-05-15 00:05:09.076776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.076793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.076802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.076818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.076827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.076842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.076851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.076866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.076875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.076890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.076900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.076915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.076924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.076940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.076949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.076964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.076973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.077816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.600 [2024-05-15 00:05:09.077831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.077852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.600 [2024-05-15 00:05:09.077861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.077879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.600 [2024-05-15 00:05:09.077889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.077909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.600 [2024-05-15 00:05:09.077918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.077936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.600 [2024-05-15 00:05:09.077946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.077963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.600 [2024-05-15 00:05:09.077972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.077990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.600 [2024-05-15 00:05:09.077999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.600 [2024-05-15 00:05:09.078026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
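Every completion in this dump carries the path-related status printed as (03/02): status code type 0x3 (Path Related) with status code 0x02, which the driver renders as ASYMMETRIC ACCESS INACCESSIBLE; the bdev_nvme layer then retries the I/O on the remaining accessible path, which is what keeps bdevperf running through each ANA flip. A quick, purely illustrative way to tally those completions from the try.txt file dumped here:

# Count path-related completion statuses in the bdevperf log dumped above.
grep -o 'ASYMMETRIC ACCESS [A-Z]*' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt |
    sort | uniq -c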
00:23:23.600 [2024-05-15 00:05:09.078187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.600 [2024-05-15 00:05:09.078627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:23.600 [2024-05-15 00:05:09.078644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.601 [2024-05-15 00:05:09.078654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.078671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.601 [2024-05-15 00:05:09.078680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.078766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.078778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.078798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.078807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.078827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.078837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.078856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.078865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.078885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.078894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.078913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.078922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.078942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.078951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.078970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.078979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
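The WRITE and READ submissions above keep flowing through the ANA flips because bdevperf drives Nvme0n1 for the whole 90-second run started earlier in this dump. To confirm I/O kept moving rather than stalling during a transition, one option (not something this test does) is to sample the bdev counters over the same bdevperf RPC socket; this sketch assumes the num_read_ops/num_write_ops fields that bdev_get_iostat reports:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

ops() {  # total completed reads + writes on the multipath bdev
    $RPC -s $SOCK bdev_get_iostat -b Nvme0n1 |
        jq '.bdevs[0].num_read_ops + .bdevs[0].num_write_ops'
}

before=$(ops); sleep 1; after=$(ops)
echo "I/O completed in the last second: $((after - before))"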
00:23:23.601 [2024-05-15 00:05:09.079086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.601 [2024-05-15 00:05:09.079240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.601 [2024-05-15 00:05:09.079270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.601 [2024-05-15 00:05:09.079298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.601 [2024-05-15 00:05:09.079327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.601 [2024-05-15 00:05:09.079357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.601 [2024-05-15 00:05:09.079385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.601 [2024-05-15 00:05:09.079413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.601 [2024-05-15 00:05:09.079442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:09.079658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:09.079668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:21.729470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:21.729506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:21.729540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:21.729550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:21.729566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:21.729575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:21.729590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:21.729600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:21.729614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.601 [2024-05-15 00:05:21.729624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:23.601 [2024-05-15 00:05:21.729639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.602 [2024-05-15 00:05:21.729649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:23.602 [2024-05-15 00:05:21.729663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.602 [2024-05-15 00:05:21.729673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:23.602 [2024-05-15 00:05:21.729687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.602 [2024-05-15 00:05:21.729697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:23.602 [2024-05-15 00:05:21.732670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:23.602 [2024-05-15 00:05:21.732694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:23.602 [2024-05-15 00:05:21.732714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.602 [2024-05-15 00:05:21.732725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.602 [2024-05-15 00:05:21.732740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.602 [2024-05-15 00:05:21.732749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.602 [2024-05-15 00:05:21.732764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.602 [2024-05-15 00:05:21.732773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.602 [2024-05-15 00:05:21.732787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.602 [2024-05-15 00:05:21.732802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:23.602 [2024-05-15 00:05:21.732817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.602 [2024-05-15 00:05:21.732826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:23.602 [2024-05-15 00:05:21.732841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.602 [2024-05-15 00:05:21.732851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:23.602 Received shutdown signal, test time was about 26.758168 seconds 00:23:23.602 00:23:23.602 Latency(us) 00:23:23.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.602 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:23.602 Verification LBA range: start 0x0 length 0x4000 00:23:23.602 Nvme0n1 : 26.76 11190.27 43.71 0.00 0.00 11417.23 891.29 3019898.88 00:23:23.602 =================================================================================================================== 00:23:23.602 Total : 11190.27 43.71 0.00 0.00 11417.23 891.29 3019898.88 00:23:23.602 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@148 -- # nvmftestfini 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:23.860 rmmod nvme_tcp 00:23:23.860 rmmod nvme_fabrics 00:23:23.860 rmmod nvme_keyring 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3678634 ']' 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3678634 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3678634 ']' 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3678634 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3678634 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3678634' 00:23:23.860 killing process with pid 3678634 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3678634 00:23:23.860 [2024-05-15 00:05:24.433458] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:23.860 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3678634 00:23:24.119 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:24.119 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:24.119 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:24.119 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.119 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.119 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.119 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.119 00:05:24 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.656 00:05:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:26.656 00:23:26.656 real 0m40.166s 00:23:26.656 user 1m42.713s 00:23:26.656 sys 0m14.018s 00:23:26.656 00:05:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:26.656 00:05:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:26.656 ************************************ 00:23:26.656 END TEST nvmf_host_multipath_status 00:23:26.656 ************************************ 00:23:26.656 00:05:26 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:26.656 00:05:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:26.656 00:05:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:26.656 00:05:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.656 ************************************ 00:23:26.656 START TEST nvmf_discovery_remove_ifc 00:23:26.656 ************************************ 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:26.656 * Looking for test storage... 00:23:26.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.656 00:05:26 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:26.656 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:26.657 00:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # 
net_devs=() 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:33.214 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.214 
00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:33.214 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:33.214 Found net devices under 0000:af:00.0: cvl_0_0 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:33.214 Found net devices under 0000:af:00.1: cvl_0_1 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:33.214 00:05:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:33.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:23:33.214 00:23:33.214 --- 10.0.0.2 ping statistics --- 00:23:33.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.214 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:23:33.214 00:23:33.214 --- 10.0.0.1 ping statistics --- 00:23:33.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.214 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.214 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3687607 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3687607 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3687607 ']' 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.215 00:05:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.215 [2024-05-15 00:05:33.521878] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
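For readers following the setup above: the harness has moved the first E810 port (cvl_0_0) into a private network namespace to act as the target side at 10.0.0.2, left the second port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, verified connectivity in both directions, and launched the SPDK target inside that namespace. A condensed sketch of that sequence, with the commands taken from the trace; the background/wait lines at the end are a simplified stand-in for the harness's waitforlisten helper, not its literal code:

# target-side port goes into its own namespace, initiator-side port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# connectivity check in both directions before any NVMe-oF traffic
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# SPDK target runs inside the namespace; wait for its RPC socket to appear
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # simplified waitforlisten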
00:23:33.215 [2024-05-15 00:05:33.521923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.215 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.215 [2024-05-15 00:05:33.593986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.215 [2024-05-15 00:05:33.665878] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.215 [2024-05-15 00:05:33.665912] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.215 [2024-05-15 00:05:33.665921] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.215 [2024-05-15 00:05:33.665930] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.215 [2024-05-15 00:05:33.665937] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.215 [2024-05-15 00:05:33.665958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.780 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.780 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:23:33.780 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.780 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.780 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.780 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.780 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:33.780 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.780 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.037 [2024-05-15 00:05:34.378895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.037 [2024-05-15 00:05:34.386869] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:34.037 [2024-05-15 00:05:34.387071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:34.037 null0 00:23:34.037 [2024-05-15 00:05:34.419052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.037 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.037 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3687855 00:23:34.037 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3687855 /tmp/host.sock 00:23:34.037 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3687855 ']' 00:23:34.037 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:23:34.037 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 
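The rpc_cmd call at discovery_remove_ifc.sh@43 is what produces the notices above (TCP transport init, a discovery listener on 10.0.0.2 port 8009, the null0 namespace, and a data listener on port 4420), but its payload is not echoed into the log. Purely as an illustration, one way to reach the same target state with stock rpc.py calls is sketched below; the null bdev size and block size, and the exact option spelling, are assumptions rather than values taken from this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp
# discovery subsystem listener on the target IP, port 8009 (discovery_port in the script)
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
# a null backing bdev and the data subsystem the host will later see as nvme0/nvme0n1
$rpc bdev_null_create null0 1000 512        # size_mb and block_size assumed
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420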
00:23:34.037 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:34.037 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:34.037 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:34.037 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.037 00:05:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:34.037 [2024-05-15 00:05:34.487742] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:23:34.037 [2024-05-15 00:05:34.487786] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3687855 ] 00:23:34.037 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.037 [2024-05-15 00:05:34.556925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.294 [2024-05-15 00:05:34.631621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.858 00:05:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.792 [2024-05-15 00:05:36.377654] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:35.792 [2024-05-15 00:05:36.377678] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:35.792 [2024-05-15 
00:05:36.377692] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:36.050 [2024-05-15 00:05:36.465959] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:36.308 [2024-05-15 00:05:36.690932] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:36.308 [2024-05-15 00:05:36.690973] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:36.308 [2024-05-15 00:05:36.690995] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:36.308 [2024-05-15 00:05:36.691008] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:36.308 [2024-05-15 00:05:36.691026] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.308 [2024-05-15 00:05:36.697474] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf1c7a0 was disconnected and freed. delete nvme_qpair. 
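The wait_for_bdev / get_bdev_list pair exercised above is the core check of this test: ask the host-side app (driven over /tmp/host.sock) for its bdev names once per second until the list matches the expected value. A simplified equivalent of those helpers, written against rpc.py directly rather than the harness's rpc_cmd wrapper:

# space-separated, sorted list of bdev names known to the host app
get_bdev_list() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

# poll once per second until the list equals the expected string
wait_for_bdev() {
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # after bdev_nvme_start_discovery attaches, exactly one namespace bdev is expected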
00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.308 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.309 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.309 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.309 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.566 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.566 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:36.566 00:05:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:37.500 00:05:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.500 00:05:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.500 00:05:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.500 00:05:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.500 00:05:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.500 00:05:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.500 00:05:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.500 00:05:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.500 00:05:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:37.500 00:05:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.435 00:05:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.435 00:05:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.435 00:05:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.435 
00:05:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.435 00:05:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.435 00:05:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.435 00:05:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.435 00:05:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.692 00:05:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:38.692 00:05:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.627 00:05:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.627 00:05:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.627 00:05:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.627 00:05:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.627 00:05:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.627 00:05:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.627 00:05:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.627 00:05:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.627 00:05:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:39.627 00:05:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:40.560 00:05:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:40.560 00:05:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.560 00:05:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:40.560 00:05:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:40.560 00:05:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.560 00:05:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:40.560 00:05:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.560 00:05:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.560 00:05:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:40.560 00:05:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:41.969 [2024-05-15 00:05:42.131811] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:41.969 [2024-05-15 00:05:42.131856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.969 [2024-05-15 00:05:42.131870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.969 [2024-05-15 
00:05:42.131881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.969 [2024-05-15 00:05:42.131890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.969 [2024-05-15 00:05:42.131900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.969 [2024-05-15 00:05:42.131910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.969 [2024-05-15 00:05:42.131919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.969 [2024-05-15 00:05:42.131928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.969 [2024-05-15 00:05:42.131938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:41.969 [2024-05-15 00:05:42.131946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.969 [2024-05-15 00:05:42.131956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee38d0 is same with the state(5) to be set 00:23:41.969 00:05:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:41.969 00:05:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.969 00:05:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.969 00:05:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.969 00:05:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:41.969 00:05:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:41.969 00:05:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:41.970 [2024-05-15 00:05:42.141831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee38d0 (9): Bad file descriptor 00:23:41.970 [2024-05-15 00:05:42.151872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:41.970 00:05:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.970 00:05:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:41.970 00:05:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:42.904 [2024-05-15 00:05:43.176333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:42.904 00:05:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:42.904 00:05:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.904 00:05:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.904 00:05:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:42.904 00:05:43 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.904 00:05:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:42.904 00:05:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:43.838 [2024-05-15 00:05:44.199226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:43.838 [2024-05-15 00:05:44.199279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee38d0 with addr=10.0.0.2, port=4420 00:23:43.838 [2024-05-15 00:05:44.199298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee38d0 is same with the state(5) to be set 00:23:43.838 [2024-05-15 00:05:44.199704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee38d0 (9): Bad file descriptor 00:23:43.838 [2024-05-15 00:05:44.199737] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:43.838 [2024-05-15 00:05:44.199761] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:43.838 [2024-05-15 00:05:44.199790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.838 [2024-05-15 00:05:44.199806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.838 [2024-05-15 00:05:44.199821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.838 [2024-05-15 00:05:44.199835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.838 [2024-05-15 00:05:44.199848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.838 [2024-05-15 00:05:44.199862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.838 [2024-05-15 00:05:44.199876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.838 [2024-05-15 00:05:44.199888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.838 [2024-05-15 00:05:44.199902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.838 [2024-05-15 00:05:44.199915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.838 [2024-05-15 00:05:44.199927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
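The repeated rpc_cmd / jq / sort / xargs blocks in this trace are the test polling the SPDK host application's bdev list over its RPC socket (/tmp/host.sock) once per second: first until nvme0n1 disappears after the target-side interface is torn down, and later until nvme1n1 shows up again. A minimal sketch of that polling pattern, reconstructed from the commands visible in the trace (rpc.py stands in for the test's rpc_cmd wrapper, which is an assumption; the real helpers live in host/discovery_remove_ifc.sh):

    # Poll the bdev list exposed by the SPDK host app until it matches what we expect.
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1    # '' while waiting for removal, nvme1n1 while waiting for re-attach
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }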
00:23:43.838 [2024-05-15 00:05:44.200327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee2d60 (9): Bad file descriptor 00:23:43.838 [2024-05-15 00:05:44.201341] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:43.838 [2024-05-15 00:05:44.201359] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:43.838 00:05:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.838 00:05:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:43.838 00:05:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.771 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:45.028 00:05:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:45.740 [2024-05-15 00:05:46.252924] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:45.740 [2024-05-15 00:05:46.252944] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:45.740 [2024-05-15 00:05:46.252957] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:45.996 [2024-05-15 00:05:46.382362] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:45.996 00:05:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:45.996 00:05:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.996 00:05:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.996 00:05:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:45.996 00:05:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.996 00:05:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:45.996 00:05:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:45.996 00:05:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.996 00:05:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:45.996 00:05:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.252 [2024-05-15 00:05:46.606438] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:46.252 [2024-05-15 00:05:46.606472] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:46.252 [2024-05-15 00:05:46.606489] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:46.252 [2024-05-15 00:05:46.606503] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:46.252 [2024-05-15 00:05:46.606511] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:46.252 [2024-05-15 00:05:46.612337] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf26e50 was disconnected and freed. delete nvme_qpair. 
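The Discovery[10.0.0.2:8009] messages above come from SPDK's bdev_nvme discovery service: once the interface address is restored, the discovery poller reconnects, fetches the discovery log page, and automatically re-attaches the reported NVM subsystem as nvme1, which is why nvme1n1 appears in the bdev list below. The discovery session itself was started earlier in the test, outside this excerpt; a hedged sketch of how such a session is typically created with the bdev_nvme_start_discovery RPC (not the literal command from this run):

    # Follow the discovery service at 10.0.0.2:8009 and auto-create bdevs
    # for any NVM subsystems it reports.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009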
00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3687855 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3687855 ']' 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3687855 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3687855 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:47.185 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3687855' 00:23:47.185 killing process with pid 3687855 00:23:47.186 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3687855 00:23:47.186 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3687855 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.444 rmmod nvme_tcp 00:23:47.444 rmmod nvme_fabrics 00:23:47.444 rmmod nvme_keyring 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
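Shutdown follows the usual autotest pattern: the trap is cleared, the SPDK host application (pid 3687855, reactor_0) is stopped with killprocess, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, and just below the nvmf target (pid 3687607, reactor_1) is stopped the same way. A simplified sketch of what the traced killprocess helper does (the real function is in common/autotest_common.sh; details trimmed):

    # Simplified sketch of the killprocess helper traced above.
    # The real helper also inspects `ps --no-headers -o comm=` to special-case
    # processes started via sudo; that branch is omitted here.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1      # a pid must be supplied
        kill -0 "$pid" || return 1     # fail if the process is already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true            # reap it; ignore its exit status
    }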
00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3687607 ']' 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3687607 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3687607 ']' 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3687607 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3687607 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3687607' 00:23:47.444 killing process with pid 3687607 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3687607 00:23:47.444 [2024-05-15 00:05:47.932066] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:47.444 00:05:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3687607 00:23:47.702 00:05:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.702 00:05:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:47.702 00:05:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:47.702 00:05:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.702 00:05:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.702 00:05:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.702 00:05:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.702 00:05:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.234 00:05:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.234 00:23:50.234 real 0m23.383s 00:23:50.234 user 0m27.398s 00:23:50.234 sys 0m7.238s 00:23:50.234 00:05:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:50.234 00:05:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.234 ************************************ 00:23:50.234 END TEST nvmf_discovery_remove_ifc 00:23:50.234 ************************************ 00:23:50.234 00:05:50 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.234 00:05:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:50.234 00:05:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:50.234 00:05:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:23:50.234 ************************************ 00:23:50.234 START TEST nvmf_identify_kernel_target 00:23:50.235 ************************************ 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.235 * Looking for test storage... 00:23:50.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:50.235 00:05:50 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.235 00:05:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:56.788 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:56.788 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:56.788 Found net devices under 0000:af:00.0: cvl_0_0 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:56.788 Found net devices under 0000:af:00.1: cvl_0_1 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.788 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.789 00:05:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:23:56.789 00:23:56.789 --- 10.0.0.2 ping statistics --- 00:23:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.789 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:23:56.789 00:23:56.789 --- 10.0.0.1 ping statistics --- 00:23:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.789 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:56.789 00:05:57 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:56.789 00:05:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:59.316 Waiting for block devices as requested 00:23:59.575 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:59.575 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:59.575 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:59.834 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:59.834 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:59.834 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:59.834 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:00.093 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:00.093 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:00.093 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:00.353 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:00.353 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:00.353 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:00.353 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:00.612 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:00.612 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:00.612 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:00.882 No valid GPT data, bailing 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:00.882 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:24:01.156 00:24:01.156 Discovery Log Number of Records 2, Generation counter 2 00:24:01.156 =====Discovery Log Entry 0====== 00:24:01.156 trtype: tcp 00:24:01.156 adrfam: ipv4 00:24:01.156 subtype: current discovery subsystem 00:24:01.156 treq: not specified, sq flow control disable supported 00:24:01.156 portid: 1 00:24:01.156 trsvcid: 4420 00:24:01.156 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:01.156 traddr: 10.0.0.1 00:24:01.156 eflags: none 00:24:01.156 sectype: none 00:24:01.156 =====Discovery Log Entry 1====== 00:24:01.156 trtype: tcp 00:24:01.156 adrfam: ipv4 00:24:01.156 subtype: nvme subsystem 00:24:01.156 treq: not specified, sq flow control disable supported 00:24:01.156 portid: 1 00:24:01.156 trsvcid: 4420 00:24:01.156 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:01.156 traddr: 10.0.0.1 00:24:01.156 eflags: none 00:24:01.156 sectype: none 00:24:01.156 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:01.156 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:01.156 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.156 ===================================================== 00:24:01.156 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:01.156 ===================================================== 00:24:01.156 Controller Capabilities/Features 00:24:01.156 ================================ 00:24:01.156 Vendor ID: 0000 00:24:01.156 Subsystem Vendor ID: 0000 00:24:01.156 Serial Number: 8683ef0bb2c0befa7e2c 00:24:01.156 Model Number: Linux 00:24:01.156 Firmware Version: 6.7.0-68 00:24:01.156 Recommended Arb Burst: 0 00:24:01.156 IEEE OUI Identifier: 00 00 00 00:24:01.156 Multi-path I/O 00:24:01.156 May have multiple subsystem ports: No 00:24:01.156 May have multiple 
controllers: No 00:24:01.156 Associated with SR-IOV VF: No 00:24:01.156 Max Data Transfer Size: Unlimited 00:24:01.156 Max Number of Namespaces: 0 00:24:01.156 Max Number of I/O Queues: 1024 00:24:01.156 NVMe Specification Version (VS): 1.3 00:24:01.156 NVMe Specification Version (Identify): 1.3 00:24:01.156 Maximum Queue Entries: 1024 00:24:01.156 Contiguous Queues Required: No 00:24:01.156 Arbitration Mechanisms Supported 00:24:01.156 Weighted Round Robin: Not Supported 00:24:01.156 Vendor Specific: Not Supported 00:24:01.156 Reset Timeout: 7500 ms 00:24:01.156 Doorbell Stride: 4 bytes 00:24:01.156 NVM Subsystem Reset: Not Supported 00:24:01.156 Command Sets Supported 00:24:01.156 NVM Command Set: Supported 00:24:01.156 Boot Partition: Not Supported 00:24:01.156 Memory Page Size Minimum: 4096 bytes 00:24:01.156 Memory Page Size Maximum: 4096 bytes 00:24:01.156 Persistent Memory Region: Not Supported 00:24:01.156 Optional Asynchronous Events Supported 00:24:01.156 Namespace Attribute Notices: Not Supported 00:24:01.156 Firmware Activation Notices: Not Supported 00:24:01.156 ANA Change Notices: Not Supported 00:24:01.156 PLE Aggregate Log Change Notices: Not Supported 00:24:01.156 LBA Status Info Alert Notices: Not Supported 00:24:01.156 EGE Aggregate Log Change Notices: Not Supported 00:24:01.156 Normal NVM Subsystem Shutdown event: Not Supported 00:24:01.156 Zone Descriptor Change Notices: Not Supported 00:24:01.156 Discovery Log Change Notices: Supported 00:24:01.156 Controller Attributes 00:24:01.156 128-bit Host Identifier: Not Supported 00:24:01.156 Non-Operational Permissive Mode: Not Supported 00:24:01.156 NVM Sets: Not Supported 00:24:01.156 Read Recovery Levels: Not Supported 00:24:01.156 Endurance Groups: Not Supported 00:24:01.156 Predictable Latency Mode: Not Supported 00:24:01.156 Traffic Based Keep ALive: Not Supported 00:24:01.156 Namespace Granularity: Not Supported 00:24:01.156 SQ Associations: Not Supported 00:24:01.156 UUID List: Not Supported 00:24:01.156 Multi-Domain Subsystem: Not Supported 00:24:01.156 Fixed Capacity Management: Not Supported 00:24:01.156 Variable Capacity Management: Not Supported 00:24:01.156 Delete Endurance Group: Not Supported 00:24:01.156 Delete NVM Set: Not Supported 00:24:01.156 Extended LBA Formats Supported: Not Supported 00:24:01.156 Flexible Data Placement Supported: Not Supported 00:24:01.156 00:24:01.156 Controller Memory Buffer Support 00:24:01.156 ================================ 00:24:01.156 Supported: No 00:24:01.156 00:24:01.156 Persistent Memory Region Support 00:24:01.156 ================================ 00:24:01.156 Supported: No 00:24:01.156 00:24:01.156 Admin Command Set Attributes 00:24:01.156 ============================ 00:24:01.156 Security Send/Receive: Not Supported 00:24:01.156 Format NVM: Not Supported 00:24:01.156 Firmware Activate/Download: Not Supported 00:24:01.156 Namespace Management: Not Supported 00:24:01.156 Device Self-Test: Not Supported 00:24:01.156 Directives: Not Supported 00:24:01.156 NVMe-MI: Not Supported 00:24:01.156 Virtualization Management: Not Supported 00:24:01.156 Doorbell Buffer Config: Not Supported 00:24:01.156 Get LBA Status Capability: Not Supported 00:24:01.156 Command & Feature Lockdown Capability: Not Supported 00:24:01.156 Abort Command Limit: 1 00:24:01.156 Async Event Request Limit: 1 00:24:01.156 Number of Firmware Slots: N/A 00:24:01.156 Firmware Slot 1 Read-Only: N/A 00:24:01.156 Firmware Activation Without Reset: N/A 00:24:01.156 Multiple Update Detection Support: N/A 
00:24:01.156 Firmware Update Granularity: No Information Provided 00:24:01.156 Per-Namespace SMART Log: No 00:24:01.156 Asymmetric Namespace Access Log Page: Not Supported 00:24:01.156 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:01.156 Command Effects Log Page: Not Supported 00:24:01.156 Get Log Page Extended Data: Supported 00:24:01.156 Telemetry Log Pages: Not Supported 00:24:01.156 Persistent Event Log Pages: Not Supported 00:24:01.156 Supported Log Pages Log Page: May Support 00:24:01.156 Commands Supported & Effects Log Page: Not Supported 00:24:01.156 Feature Identifiers & Effects Log Page:May Support 00:24:01.156 NVMe-MI Commands & Effects Log Page: May Support 00:24:01.156 Data Area 4 for Telemetry Log: Not Supported 00:24:01.156 Error Log Page Entries Supported: 1 00:24:01.156 Keep Alive: Not Supported 00:24:01.156 00:24:01.156 NVM Command Set Attributes 00:24:01.156 ========================== 00:24:01.156 Submission Queue Entry Size 00:24:01.156 Max: 1 00:24:01.156 Min: 1 00:24:01.156 Completion Queue Entry Size 00:24:01.156 Max: 1 00:24:01.156 Min: 1 00:24:01.156 Number of Namespaces: 0 00:24:01.156 Compare Command: Not Supported 00:24:01.156 Write Uncorrectable Command: Not Supported 00:24:01.156 Dataset Management Command: Not Supported 00:24:01.156 Write Zeroes Command: Not Supported 00:24:01.156 Set Features Save Field: Not Supported 00:24:01.156 Reservations: Not Supported 00:24:01.156 Timestamp: Not Supported 00:24:01.156 Copy: Not Supported 00:24:01.156 Volatile Write Cache: Not Present 00:24:01.156 Atomic Write Unit (Normal): 1 00:24:01.156 Atomic Write Unit (PFail): 1 00:24:01.156 Atomic Compare & Write Unit: 1 00:24:01.156 Fused Compare & Write: Not Supported 00:24:01.156 Scatter-Gather List 00:24:01.156 SGL Command Set: Supported 00:24:01.156 SGL Keyed: Not Supported 00:24:01.156 SGL Bit Bucket Descriptor: Not Supported 00:24:01.156 SGL Metadata Pointer: Not Supported 00:24:01.156 Oversized SGL: Not Supported 00:24:01.156 SGL Metadata Address: Not Supported 00:24:01.156 SGL Offset: Supported 00:24:01.157 Transport SGL Data Block: Not Supported 00:24:01.157 Replay Protected Memory Block: Not Supported 00:24:01.157 00:24:01.157 Firmware Slot Information 00:24:01.157 ========================= 00:24:01.157 Active slot: 0 00:24:01.157 00:24:01.157 00:24:01.157 Error Log 00:24:01.157 ========= 00:24:01.157 00:24:01.157 Active Namespaces 00:24:01.157 ================= 00:24:01.157 Discovery Log Page 00:24:01.157 ================== 00:24:01.157 Generation Counter: 2 00:24:01.157 Number of Records: 2 00:24:01.157 Record Format: 0 00:24:01.157 00:24:01.157 Discovery Log Entry 0 00:24:01.157 ---------------------- 00:24:01.157 Transport Type: 3 (TCP) 00:24:01.157 Address Family: 1 (IPv4) 00:24:01.157 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:01.157 Entry Flags: 00:24:01.157 Duplicate Returned Information: 0 00:24:01.157 Explicit Persistent Connection Support for Discovery: 0 00:24:01.157 Transport Requirements: 00:24:01.157 Secure Channel: Not Specified 00:24:01.157 Port ID: 1 (0x0001) 00:24:01.157 Controller ID: 65535 (0xffff) 00:24:01.157 Admin Max SQ Size: 32 00:24:01.157 Transport Service Identifier: 4420 00:24:01.157 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:01.157 Transport Address: 10.0.0.1 00:24:01.157 Discovery Log Entry 1 00:24:01.157 ---------------------- 00:24:01.157 Transport Type: 3 (TCP) 00:24:01.157 Address Family: 1 (IPv4) 00:24:01.157 Subsystem Type: 2 (NVM Subsystem) 00:24:01.157 Entry Flags: 
00:24:01.157 Duplicate Returned Information: 0 00:24:01.157 Explicit Persistent Connection Support for Discovery: 0 00:24:01.157 Transport Requirements: 00:24:01.157 Secure Channel: Not Specified 00:24:01.157 Port ID: 1 (0x0001) 00:24:01.157 Controller ID: 65535 (0xffff) 00:24:01.157 Admin Max SQ Size: 32 00:24:01.157 Transport Service Identifier: 4420 00:24:01.157 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:01.157 Transport Address: 10.0.0.1 00:24:01.157 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:01.157 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.157 get_feature(0x01) failed 00:24:01.157 get_feature(0x02) failed 00:24:01.157 get_feature(0x04) failed 00:24:01.157 ===================================================== 00:24:01.157 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:01.157 ===================================================== 00:24:01.157 Controller Capabilities/Features 00:24:01.157 ================================ 00:24:01.157 Vendor ID: 0000 00:24:01.157 Subsystem Vendor ID: 0000 00:24:01.157 Serial Number: 22421167e34cab733ad4 00:24:01.157 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:01.157 Firmware Version: 6.7.0-68 00:24:01.157 Recommended Arb Burst: 6 00:24:01.157 IEEE OUI Identifier: 00 00 00 00:24:01.157 Multi-path I/O 00:24:01.157 May have multiple subsystem ports: Yes 00:24:01.157 May have multiple controllers: Yes 00:24:01.157 Associated with SR-IOV VF: No 00:24:01.157 Max Data Transfer Size: Unlimited 00:24:01.157 Max Number of Namespaces: 1024 00:24:01.157 Max Number of I/O Queues: 128 00:24:01.157 NVMe Specification Version (VS): 1.3 00:24:01.157 NVMe Specification Version (Identify): 1.3 00:24:01.157 Maximum Queue Entries: 1024 00:24:01.157 Contiguous Queues Required: No 00:24:01.157 Arbitration Mechanisms Supported 00:24:01.157 Weighted Round Robin: Not Supported 00:24:01.157 Vendor Specific: Not Supported 00:24:01.157 Reset Timeout: 7500 ms 00:24:01.157 Doorbell Stride: 4 bytes 00:24:01.157 NVM Subsystem Reset: Not Supported 00:24:01.157 Command Sets Supported 00:24:01.157 NVM Command Set: Supported 00:24:01.157 Boot Partition: Not Supported 00:24:01.157 Memory Page Size Minimum: 4096 bytes 00:24:01.157 Memory Page Size Maximum: 4096 bytes 00:24:01.157 Persistent Memory Region: Not Supported 00:24:01.157 Optional Asynchronous Events Supported 00:24:01.157 Namespace Attribute Notices: Supported 00:24:01.157 Firmware Activation Notices: Not Supported 00:24:01.157 ANA Change Notices: Supported 00:24:01.157 PLE Aggregate Log Change Notices: Not Supported 00:24:01.157 LBA Status Info Alert Notices: Not Supported 00:24:01.157 EGE Aggregate Log Change Notices: Not Supported 00:24:01.157 Normal NVM Subsystem Shutdown event: Not Supported 00:24:01.157 Zone Descriptor Change Notices: Not Supported 00:24:01.157 Discovery Log Change Notices: Not Supported 00:24:01.157 Controller Attributes 00:24:01.157 128-bit Host Identifier: Supported 00:24:01.157 Non-Operational Permissive Mode: Not Supported 00:24:01.157 NVM Sets: Not Supported 00:24:01.157 Read Recovery Levels: Not Supported 00:24:01.157 Endurance Groups: Not Supported 00:24:01.157 Predictable Latency Mode: Not Supported 00:24:01.157 Traffic Based Keep ALive: Supported 00:24:01.157 Namespace Granularity: Not Supported 
00:24:01.157 SQ Associations: Not Supported 00:24:01.157 UUID List: Not Supported 00:24:01.157 Multi-Domain Subsystem: Not Supported 00:24:01.157 Fixed Capacity Management: Not Supported 00:24:01.157 Variable Capacity Management: Not Supported 00:24:01.157 Delete Endurance Group: Not Supported 00:24:01.157 Delete NVM Set: Not Supported 00:24:01.157 Extended LBA Formats Supported: Not Supported 00:24:01.157 Flexible Data Placement Supported: Not Supported 00:24:01.157 00:24:01.157 Controller Memory Buffer Support 00:24:01.157 ================================ 00:24:01.157 Supported: No 00:24:01.157 00:24:01.157 Persistent Memory Region Support 00:24:01.157 ================================ 00:24:01.157 Supported: No 00:24:01.157 00:24:01.157 Admin Command Set Attributes 00:24:01.157 ============================ 00:24:01.157 Security Send/Receive: Not Supported 00:24:01.157 Format NVM: Not Supported 00:24:01.157 Firmware Activate/Download: Not Supported 00:24:01.157 Namespace Management: Not Supported 00:24:01.157 Device Self-Test: Not Supported 00:24:01.157 Directives: Not Supported 00:24:01.157 NVMe-MI: Not Supported 00:24:01.157 Virtualization Management: Not Supported 00:24:01.157 Doorbell Buffer Config: Not Supported 00:24:01.157 Get LBA Status Capability: Not Supported 00:24:01.157 Command & Feature Lockdown Capability: Not Supported 00:24:01.157 Abort Command Limit: 4 00:24:01.157 Async Event Request Limit: 4 00:24:01.157 Number of Firmware Slots: N/A 00:24:01.157 Firmware Slot 1 Read-Only: N/A 00:24:01.157 Firmware Activation Without Reset: N/A 00:24:01.157 Multiple Update Detection Support: N/A 00:24:01.157 Firmware Update Granularity: No Information Provided 00:24:01.157 Per-Namespace SMART Log: Yes 00:24:01.157 Asymmetric Namespace Access Log Page: Supported 00:24:01.157 ANA Transition Time : 10 sec 00:24:01.157 00:24:01.157 Asymmetric Namespace Access Capabilities 00:24:01.157 ANA Optimized State : Supported 00:24:01.157 ANA Non-Optimized State : Supported 00:24:01.157 ANA Inaccessible State : Supported 00:24:01.157 ANA Persistent Loss State : Supported 00:24:01.157 ANA Change State : Supported 00:24:01.157 ANAGRPID is not changed : No 00:24:01.157 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:01.157 00:24:01.157 ANA Group Identifier Maximum : 128 00:24:01.157 Number of ANA Group Identifiers : 128 00:24:01.157 Max Number of Allowed Namespaces : 1024 00:24:01.157 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:01.157 Command Effects Log Page: Supported 00:24:01.157 Get Log Page Extended Data: Supported 00:24:01.157 Telemetry Log Pages: Not Supported 00:24:01.157 Persistent Event Log Pages: Not Supported 00:24:01.157 Supported Log Pages Log Page: May Support 00:24:01.157 Commands Supported & Effects Log Page: Not Supported 00:24:01.157 Feature Identifiers & Effects Log Page:May Support 00:24:01.157 NVMe-MI Commands & Effects Log Page: May Support 00:24:01.157 Data Area 4 for Telemetry Log: Not Supported 00:24:01.157 Error Log Page Entries Supported: 128 00:24:01.157 Keep Alive: Supported 00:24:01.157 Keep Alive Granularity: 1000 ms 00:24:01.157 00:24:01.157 NVM Command Set Attributes 00:24:01.157 ========================== 00:24:01.157 Submission Queue Entry Size 00:24:01.157 Max: 64 00:24:01.157 Min: 64 00:24:01.157 Completion Queue Entry Size 00:24:01.157 Max: 16 00:24:01.157 Min: 16 00:24:01.157 Number of Namespaces: 1024 00:24:01.157 Compare Command: Not Supported 00:24:01.157 Write Uncorrectable Command: Not Supported 00:24:01.157 Dataset Management Command: Supported 
00:24:01.157 Write Zeroes Command: Supported 00:24:01.157 Set Features Save Field: Not Supported 00:24:01.157 Reservations: Not Supported 00:24:01.157 Timestamp: Not Supported 00:24:01.157 Copy: Not Supported 00:24:01.157 Volatile Write Cache: Present 00:24:01.157 Atomic Write Unit (Normal): 1 00:24:01.157 Atomic Write Unit (PFail): 1 00:24:01.157 Atomic Compare & Write Unit: 1 00:24:01.157 Fused Compare & Write: Not Supported 00:24:01.157 Scatter-Gather List 00:24:01.157 SGL Command Set: Supported 00:24:01.157 SGL Keyed: Not Supported 00:24:01.157 SGL Bit Bucket Descriptor: Not Supported 00:24:01.158 SGL Metadata Pointer: Not Supported 00:24:01.158 Oversized SGL: Not Supported 00:24:01.158 SGL Metadata Address: Not Supported 00:24:01.158 SGL Offset: Supported 00:24:01.158 Transport SGL Data Block: Not Supported 00:24:01.158 Replay Protected Memory Block: Not Supported 00:24:01.158 00:24:01.158 Firmware Slot Information 00:24:01.158 ========================= 00:24:01.158 Active slot: 0 00:24:01.158 00:24:01.158 Asymmetric Namespace Access 00:24:01.158 =========================== 00:24:01.158 Change Count : 0 00:24:01.158 Number of ANA Group Descriptors : 1 00:24:01.158 ANA Group Descriptor : 0 00:24:01.158 ANA Group ID : 1 00:24:01.158 Number of NSID Values : 1 00:24:01.158 Change Count : 0 00:24:01.158 ANA State : 1 00:24:01.158 Namespace Identifier : 1 00:24:01.158 00:24:01.158 Commands Supported and Effects 00:24:01.158 ============================== 00:24:01.158 Admin Commands 00:24:01.158 -------------- 00:24:01.158 Get Log Page (02h): Supported 00:24:01.158 Identify (06h): Supported 00:24:01.158 Abort (08h): Supported 00:24:01.158 Set Features (09h): Supported 00:24:01.158 Get Features (0Ah): Supported 00:24:01.158 Asynchronous Event Request (0Ch): Supported 00:24:01.158 Keep Alive (18h): Supported 00:24:01.158 I/O Commands 00:24:01.158 ------------ 00:24:01.158 Flush (00h): Supported 00:24:01.158 Write (01h): Supported LBA-Change 00:24:01.158 Read (02h): Supported 00:24:01.158 Write Zeroes (08h): Supported LBA-Change 00:24:01.158 Dataset Management (09h): Supported 00:24:01.158 00:24:01.158 Error Log 00:24:01.158 ========= 00:24:01.158 Entry: 0 00:24:01.158 Error Count: 0x3 00:24:01.158 Submission Queue Id: 0x0 00:24:01.158 Command Id: 0x5 00:24:01.158 Phase Bit: 0 00:24:01.158 Status Code: 0x2 00:24:01.158 Status Code Type: 0x0 00:24:01.158 Do Not Retry: 1 00:24:01.158 Error Location: 0x28 00:24:01.158 LBA: 0x0 00:24:01.158 Namespace: 0x0 00:24:01.158 Vendor Log Page: 0x0 00:24:01.158 ----------- 00:24:01.158 Entry: 1 00:24:01.158 Error Count: 0x2 00:24:01.158 Submission Queue Id: 0x0 00:24:01.158 Command Id: 0x5 00:24:01.158 Phase Bit: 0 00:24:01.158 Status Code: 0x2 00:24:01.158 Status Code Type: 0x0 00:24:01.158 Do Not Retry: 1 00:24:01.158 Error Location: 0x28 00:24:01.158 LBA: 0x0 00:24:01.158 Namespace: 0x0 00:24:01.158 Vendor Log Page: 0x0 00:24:01.158 ----------- 00:24:01.158 Entry: 2 00:24:01.158 Error Count: 0x1 00:24:01.158 Submission Queue Id: 0x0 00:24:01.158 Command Id: 0x4 00:24:01.158 Phase Bit: 0 00:24:01.158 Status Code: 0x2 00:24:01.158 Status Code Type: 0x0 00:24:01.158 Do Not Retry: 1 00:24:01.158 Error Location: 0x28 00:24:01.158 LBA: 0x0 00:24:01.158 Namespace: 0x0 00:24:01.158 Vendor Log Page: 0x0 00:24:01.158 00:24:01.158 Number of Queues 00:24:01.158 ================ 00:24:01.158 Number of I/O Submission Queues: 128 00:24:01.158 Number of I/O Completion Queues: 128 00:24:01.158 00:24:01.158 ZNS Specific Controller Data 00:24:01.158 
============================ 00:24:01.158 Zone Append Size Limit: 0 00:24:01.158 00:24:01.158 00:24:01.158 Active Namespaces 00:24:01.158 ================= 00:24:01.158 get_feature(0x05) failed 00:24:01.158 Namespace ID:1 00:24:01.158 Command Set Identifier: NVM (00h) 00:24:01.158 Deallocate: Supported 00:24:01.158 Deallocated/Unwritten Error: Not Supported 00:24:01.158 Deallocated Read Value: Unknown 00:24:01.158 Deallocate in Write Zeroes: Not Supported 00:24:01.158 Deallocated Guard Field: 0xFFFF 00:24:01.158 Flush: Supported 00:24:01.158 Reservation: Not Supported 00:24:01.158 Namespace Sharing Capabilities: Multiple Controllers 00:24:01.158 Size (in LBAs): 3125627568 (1490GiB) 00:24:01.158 Capacity (in LBAs): 3125627568 (1490GiB) 00:24:01.158 Utilization (in LBAs): 3125627568 (1490GiB) 00:24:01.158 UUID: 7ae285cf-8689-4aab-8273-1d9704af68cf 00:24:01.158 Thin Provisioning: Not Supported 00:24:01.158 Per-NS Atomic Units: Yes 00:24:01.158 Atomic Boundary Size (Normal): 0 00:24:01.158 Atomic Boundary Size (PFail): 0 00:24:01.158 Atomic Boundary Offset: 0 00:24:01.158 NGUID/EUI64 Never Reused: No 00:24:01.158 ANA group ID: 1 00:24:01.158 Namespace Write Protected: No 00:24:01.158 Number of LBA Formats: 1 00:24:01.158 Current LBA Format: LBA Format #00 00:24:01.158 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:01.158 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.158 rmmod nvme_tcp 00:24:01.158 rmmod nvme_fabrics 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.158 00:06:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.702 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
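The controller and namespace data above were read back from the kernel nvmet target this test exports (Subsystem NQN nqn.2016-06.io.spdk:testnqn). As a rough, illustrative sketch only: the 10.0.0.1:4420 address mirrors what the later auth test uses and the /dev/nvme1 node name depends on the local system, and the target is torn down in the very next step, but the same data could in principle be pulled by hand with nvme-cli:

nvme discover -t tcp -a 10.0.0.1 -s 4420                         # discovery log for the exported subsystem
nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme id-ctrl /dev/nvme1                                          # controller attributes (ANA, log pages, ...)
nvme id-ns   /dev/nvme1n1                                        # namespace size, LBA formats
nvme disconnect -n nqn.2016-06.io.spdk:testnqn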
00:24:03.702 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:03.702 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:03.702 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:03.702 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:03.702 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:03.702 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:03.702 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:03.702 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:03.702 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:03.703 00:06:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:06.983 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:06.983 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:08.359 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:24:08.359 00:24:08.359 real 0m18.542s 00:24:08.359 user 0m4.386s 00:24:08.359 sys 0m9.806s 00:24:08.359 00:06:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:08.359 00:06:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.359 ************************************ 00:24:08.359 END TEST nvmf_identify_kernel_target 00:24:08.359 ************************************ 00:24:08.359 00:06:08 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:08.359 00:06:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:08.359 00:06:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:08.359 00:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:08.359 ************************************ 00:24:08.359 START TEST nvmf_auth 00:24:08.359 ************************************ 00:24:08.359 00:06:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
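The clean_kernel_target steps traced just above boil down to a short configfs teardown. A minimal stand-alone sketch of the equivalent commands; xtrace does not show the redirect target of the traced 'echo 0', so it is assumed here to be the namespace enable attribute:

SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
PORT=/sys/kernel/config/nvmet/ports/1
echo 0 > "$SUBSYS/namespaces/1/enable"                 # assumed target of the traced 'echo 0'
rm -f "$PORT/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink the subsystem from the port first
rmdir "$SUBSYS/namespaces/1"
rmdir "$PORT"
rmdir "$SUBSYS"
modprobe -r nvmet_tcp nvmet                            # unload once nothing else holds the modules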
00:24:08.618 * Looking for test storage... 00:24:08.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:24:08.618 
00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.618 00:06:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # pci_devs=() 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # net_devs=() 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # e810=() 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # local -ga e810 00:24:15.176 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # x722=() 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # local -ga x722 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # mlx=() 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # local -ga mlx 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:15.177 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:15.177 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:15.177 Found net devices under 0000:af:00.0: cvl_0_0 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:15.177 Found net devices under 0000:af:00.1: cvl_0_1 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # is_hw=yes 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:15.177 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:15.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:15.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:24:15.436 00:24:15.436 --- 10.0.0.2 ping statistics --- 00:24:15.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.436 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:15.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:24:15.436 00:24:15.436 --- 10.0.0.1 ping statistics --- 00:24:15.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.436 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@422 -- # return 0 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=3700979 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 3700979 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 3700979 ']' 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
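The nvmftestinit/nvmf_tcp_init trace above turns the two E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings verify both directions. Condensed into a sketch, using the interface names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP through
ping -c 1 10.0.0.2                                               # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator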
00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:15.436 00:06:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:16.370 00:06:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:16.370 00:06:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:24:16.370 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.370 00:06:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.370 00:06:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:16.370 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.370 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:16.370 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:24:16.370 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=ec3459276d3de89e3fe1910fdd475636 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.hJz 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key ec3459276d3de89e3fe1910fdd475636 0 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 ec3459276d3de89e3fe1910fdd475636 0 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=ec3459276d3de89e3fe1910fdd475636 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.hJz 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.hJz 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.hJz 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=b080b93880debc379820bb9ee2c305075774909400d9c947878ac2fc2dde37c1 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.3HY 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key b080b93880debc379820bb9ee2c305075774909400d9c947878ac2fc2dde37c1 3 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 b080b93880debc379820bb9ee2c305075774909400d9c947878ac2fc2dde37c1 3 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=b080b93880debc379820bb9ee2c305075774909400d9c947878ac2fc2dde37c1 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.3HY 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.3HY 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.3HY 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=e2eaeb0b7200b2e248c12229ca52932ed88a39100aea2caf 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.8bA 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key e2eaeb0b7200b2e248c12229ca52932ed88a39100aea2caf 0 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 e2eaeb0b7200b2e248c12229ca52932ed88a39100aea2caf 0 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=e2eaeb0b7200b2e248c12229ca52932ed88a39100aea2caf 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.8bA 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.8bA 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.8bA 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=65ed7e0374f277fd333cb4083cfae46ba88aa1f714d6d11b 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.zqM 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 65ed7e0374f277fd333cb4083cfae46ba88aa1f714d6d11b 2 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 65ed7e0374f277fd333cb4083cfae46ba88aa1f714d6d11b 2 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=65ed7e0374f277fd333cb4083cfae46ba88aa1f714d6d11b 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:24:16.371 00:06:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:24:16.629 00:06:16 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.zqM 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.zqM 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.zqM 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=09f3ab6570885e9e0918699d3563e1cb 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.jjX 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 09f3ab6570885e9e0918699d3563e1cb 1 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 09f3ab6570885e9e0918699d3563e1cb 1 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=09f3ab6570885e9e0918699d3563e1cb 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.jjX 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.jjX 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.jjX 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=e43e7b83c2ce5b2604f64dbdecde2a27 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.Qon 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key e43e7b83c2ce5b2604f64dbdecde2a27 1 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 e43e7b83c2ce5b2604f64dbdecde2a27 1 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=e43e7b83c2ce5b2604f64dbdecde2a27 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.Qon 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.Qon 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.Qon 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=ed8d945c57bf9a68a9816efa9d721bfb39163fc7a05fd844 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.kUS 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key ed8d945c57bf9a68a9816efa9d721bfb39163fc7a05fd844 2 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 ed8d945c57bf9a68a9816efa9d721bfb39163fc7a05fd844 2 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=ed8d945c57bf9a68a9816efa9d721bfb39163fc7a05fd844 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.kUS 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.kUS 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.kUS 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=bd71f44bf7cd3a124f19b42e8be732a2 00:24:16.629 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.Wj2 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key bd71f44bf7cd3a124f19b42e8be732a2 0 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 bd71f44bf7cd3a124f19b42e8be732a2 0 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=bd71f44bf7cd3a124f19b42e8be732a2 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.Wj2 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.Wj2 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.Wj2 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=854817d9bdab4679007fe63f703a979eeec17dfa7f6638896ceb92f4cf27ed93 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.bpI 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 854817d9bdab4679007fe63f703a979eeec17dfa7f6638896ceb92f4cf27ed93 3 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 854817d9bdab4679007fe63f703a979eeec17dfa7f6638896ceb92f4cf27ed93 3 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=854817d9bdab4679007fe63f703a979eeec17dfa7f6638896ceb92f4cf27ed93 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.bpI 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.bpI 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.bpI 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 3700979 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 3700979 ']' 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:16.887 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hJz 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.3HY ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3HY 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.8bA 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.zqM ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zqM 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jjX 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.Qon ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qon 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.kUS 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.Wj2 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Wj2 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bpI 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:17.145 00:06:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:20.423 Waiting for block devices as requested 00:24:20.423 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:20.423 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:20.423 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:20.423 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:20.680 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:20.680 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:20.680 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:20.680 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:20.938 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:20.938 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:20.938 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:21.205 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:21.205 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:21.205 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:21.463 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:21.463 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:21.463 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:22.425 No valid GPT data, bailing 00:24:22.425 
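For reference, the DHHC-1:... secrets registered with keyring_file_add_key above were built by the gen_key/format_dhchap_key helpers traced earlier in this test: xxd pulls random bytes from /dev/urandom as a hex string, and a small Python one-liner wraps that string as base64 with a trailing CRC-32, prefixed by the hash identifier. A minimal stand-alone sketch of the same idea; the CRC/base64 layout is inferred from the traced helper and the printed key strings, so treat it as illustrative rather than authoritative:

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes -> 32 hex characters, as gen_key does
python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key"
# "00" is the hash identifier (0 for a plain/null key here); the base64 payload is the hex
# string itself followed by its CRC-32, which matches the structure of the DHHC-1 strings in this log.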
00:06:22 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:24:22.425 00:24:22.425 Discovery Log Number of Records 2, Generation counter 2 00:24:22.425 =====Discovery Log Entry 0====== 00:24:22.425 trtype: tcp 00:24:22.425 adrfam: ipv4 00:24:22.425 subtype: current discovery subsystem 00:24:22.425 treq: not specified, sq flow control disable supported 00:24:22.425 portid: 1 00:24:22.425 trsvcid: 4420 00:24:22.425 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:22.425 traddr: 10.0.0.1 00:24:22.425 eflags: none 00:24:22.425 sectype: none 00:24:22.425 =====Discovery Log Entry 1====== 00:24:22.425 trtype: tcp 00:24:22.425 adrfam: ipv4 00:24:22.425 subtype: nvme subsystem 00:24:22.425 treq: not specified, sq flow control disable supported 00:24:22.425 portid: 1 00:24:22.425 trsvcid: 4420 00:24:22.425 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:22.425 traddr: 10.0.0.1 00:24:22.425 eflags: none 00:24:22.425 sectype: none 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:22.425 00:06:22 
nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.425 00:06:22 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:22.683 nvme0n1 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 
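Note: each connect_authenticate pass in the trace above reduces to the same three host-side RPC steps. The sketch below restates them as standalone commands, assuming SPDK's scripts/rpc.py as the RPC client (the test drives it through its rpc_cmd wrapper) and reusing key names, key-file paths, NQNs and addresses that appear verbatim in this run; it is an illustration of the flow, not an excerpt of host/auth.sh.

# Host-side DH-HMAC-CHAP attach, as exercised by connect_authenticate above.
# Sketch only: assumes scripts/rpc.py talking to the default SPDK RPC socket.
rpc=./scripts/rpc.py

# 1) register the host key and the bidirectional controller key with the keyring
#    (paths taken from the keyring_file_add_key calls earlier in this run)
$rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.jjX
$rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qon

# 2) restrict the digests and DH groups the initiator may negotiate
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 3) attach to the kernel target, authenticating with key2/ckey2
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4) the test then confirms the controller came up and detaches it before the
#    next digest/dhgroup/keyid combination, hence the recurring
#    bdev_nvme_get_controllers / bdev_nvme_detach_controller calls in the trace
$rpc bdev_nvme_get_controllers
$rpc bdev_nvme_detach_controller nvme0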
00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.683 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.684 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:22.942 nvme0n1 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:22.942 00:06:23 
nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:22.942 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.943 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.201 nvme0n1 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- 
host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.201 nvme0n1 00:24:23.201 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:23.458 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 
00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.459 00:06:23 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.459 nvme0n1 00:24:23.459 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.459 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.459 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:23.459 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.459 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.459 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.717 nvme0n1 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.717 00:06:24 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.717 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.718 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:23.976 00:06:24 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.976 nvme0n1 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.976 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.977 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.235 nvme0n1 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.235 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.493 nvme0n1 00:24:24.493 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.493 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:24.493 00:06:24 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.493 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.493 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.493 00:06:24 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.493 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.751 nvme0n1 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.751 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.008 nvme0n1 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.008 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.266 nvme0n1 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.266 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.524 00:06:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.524 nvme0n1 00:24:25.524 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.524 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:25.524 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.524 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.524 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.524 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.782 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:25.783 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.783 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:25.783 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:25.783 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:25.783 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.783 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.783 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.040 nvme0n1 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:26.040 00:06:26 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:26.040 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
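The xtrace around this point repeats one pattern per (dhgroup, keyid) pair in the sha256 pass: host/auth.sh installs the key on the target side with nvmet_auth_set_key, restricts the host to the digest and FFDHE group under test with bdev_nvme_set_options, resolves the initiator-reachable address, attaches a controller with --dhchap-key keyN (adding --dhchap-ctrlr-key ckeyN only when a controller key is defined; key id 4 has none, so that pass authenticates in one direction only), confirms that bdev_nvme_get_controllers reports nvme0, and detaches again. A condensed sketch of that loop, not the verbatim script, assuming the rpc_cmd helper and the keys[]/ckeys[]/dhgroups[] arrays set up earlier in host/auth.sh:

for dhgroup in "${dhgroups[@]}"; do
	for keyid in "${!keys[@]}"; do
		# Target side: install the DH-HMAC-CHAP key (and controller key, if any) for this key id.
		nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
		# Host side: allow only the digest/dhgroup combination being exercised in this pass.
		rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
		# Attach; the controller key is passed only when ckeys[keyid] is non-empty (it is empty for key id 4).
		rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
			-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
			--dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
		# The controller only exists if DH-HMAC-CHAP authentication succeeded.
		[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
		rpc_cmd bdev_nvme_detach_controller nvme0
	done
done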
00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.041 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.298 nvme0n1 00:24:26.298 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.298 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.298 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:26.298 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.298 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.298 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.298 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:26.299 00:06:26 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.299 00:06:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.556 nvme0n1 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.556 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.557 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.122 nvme0n1 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:27.122 
00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.122 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.380 nvme0n1 00:24:27.380 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.380 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.380 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:27.380 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.380 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.380 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.380 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.380 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.380 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.380 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:27.638 
00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.638 00:06:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.896 nvme0n1 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:27.896 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:27.897 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:28.461 nvme0n1 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:28.461 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
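Before each attach the trace also steps through the address-selection helper (get_main_ns_ip in nvmf/common.sh): it maps the transport to an environment variable name, rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, then expands that name indirectly, which is why the checks on "tcp" and "NVMF_INITIATOR_IP" end in "echo 10.0.0.1". A simplified reconstruction from the xtrace above; the transport variable name is an assumption:

get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		["rdma"]=NVMF_FIRST_TARGET_IP
		["tcp"]=NVMF_INITIATOR_IP
	)
	# TEST_TRANSPORT is tcp in this run; unknown transports are rejected.
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the variable *name*, NVMF_INITIATOR_IP here
	[[ -z ${!ip} ]] && return 1            # indirect expansion yields 10.0.0.1 in this environment
	echo "${!ip}"
}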
00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.462 00:06:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:28.719 nvme0n1 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.719 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:29.284 nvme0n1 00:24:29.284 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.284 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.284 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.284 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:29.284 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:29.284 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.542 00:06:29 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.542 00:06:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:30.108 nvme0n1 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.108 00:06:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:30.674 nvme0n1 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:30.674 
00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:30.674 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.675 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:31.240 nvme0n1 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.240 00:06:31 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:31.240 00:06:31 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.240 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.497 00:06:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.063 nvme0n1 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:32.063 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:32.064 
00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.064 nvme0n1 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.064 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.322 nvme0n1 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:32.322 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:32.323 00:06:32 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.323 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.323 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:32.323 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.323 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:32.323 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:32.323 00:06:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:32.323 00:06:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.581 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.581 00:06:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.581 nvme0n1 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha384 ffdhe2048 3 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.581 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.839 nvme0n1 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:32.839 00:06:33 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.839 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.097 nvme0n1 00:24:33.097 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.097 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
00:24:33.097 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.097 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:33.097 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.097 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.097 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.097 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.097 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:33.098 
00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.098 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.356 nvme0n1 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.356 00:06:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.614 nvme0n1 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:33.614 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.615 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.873 nvme0n1 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.874 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.133 nvme0n1 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.133 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.448 nvme0n1 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:34.448 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.449 00:06:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.726 nvme0n1 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.726 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.985 nvme0n1 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.985 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:35.244 nvme0n1 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.244 00:06:35 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:35.244 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.245 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:35.504 nvme0n1 00:24:35.504 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.504 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.504 00:06:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:35.504 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.504 00:06:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:35.504 00:06:36 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.504 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:35.763 nvme0n1 00:24:35.763 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.763 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.763 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:35.763 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.763 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:35.763 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.763 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.763 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.763 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.763 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.021 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.022 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:36.281 nvme0n1 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:36.281 
00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.281 00:06:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:36.848 nvme0n1 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:36.848 
00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.848 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:37.106 nvme0n1 00:24:37.106 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.106 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.106 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.106 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:37.106 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:37.107 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.107 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.107 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.107 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.107 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:37.365 00:06:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:37.623 nvme0n1 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.623 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:38.190 nvme0n1 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.190 00:06:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:38.757 nvme0n1 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.757 00:06:39 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.757 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:39.324 nvme0n1 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe8192 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.324 00:06:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:39.891 nvme0n1 00:24:39.891 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.891 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.891 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:39.891 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.891 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:39.891 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:40.150 
00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.150 00:06:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:40.717 nvme0n1 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.717 00:06:41 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:40.717 00:06:41 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.717 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.283 nvme0n1 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:41.283 
00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:41.283 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.284 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:41.284 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:41.284 00:06:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:41.284 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.284 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.284 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.542 nvme0n1 00:24:41.542 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.542 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:41.542 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.542 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.542 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.542 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.542 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.542 00:06:41 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.542 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.542 00:06:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:41.542 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.543 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.801 nvme0n1 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 
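Each keyid pass in the trace above runs the same initiator-side sequence: bdev_nvme_set_options pins the allowed DH-HMAC-CHAP digest and DH group, bdev_nvme_attach_controller connects with the key (and controller key, when one is defined) for that keyid, bdev_nvme_get_controllers piped through jq confirms the controller name, and bdev_nvme_detach_controller tears it down before the next combination. The sketch below replays that cycle as standalone commands; the rpc.py path is an assumption standing in for the trace's rpc_cmd wrapper, and key1/ckey1 are key names assumed to have been registered beforehand (that setup is not part of this excerpt), while the address, port and NQNs are the ones seen in the trace.

  rpc=./scripts/rpc.py                      # assumed location of SPDK's rpc.py
  # Restrict the initiator to one digest/dhgroup combination.
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # Authenticated attach using the key (and controller key) for this keyid.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Verify the controller came up, then detach so the next combination starts clean.
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc bdev_nvme_detach_controller nvme0

Because the controller is detached after every verification, each digest/dhgroup/keyid combination authenticates from scratch rather than reusing an existing session.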
00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:41.801 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:41.802 nvme0n1 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.802 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha512 ffdhe2048 3 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.061 nvme0n1 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:42.061 00:06:42 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.061 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.320 nvme0n1 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
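The script-and-line references in the trace (host/auth.sh@113, @114, @115, @116) show the driver behind all of these repetitions: a loop over digests, inside it a loop over DH groups, and inside that a loop over key indices, each iteration calling nvmet_auth_set_key to program the target side and connect_authenticate to run the attach/verify/detach cycle shown above. A rough reconstruction of that driver is sketched below; the arrays list only the values that appear in this excerpt (the full script may cover more), the key names are placeholders, and the two helper functions are assumed to be the ones defined in host/auth.sh with the argument order seen in the trace.

  digests=(sha384 sha512)                            # digests observed in this excerpt
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192) # DH groups observed in this excerpt
  keys=(key0 key1 key2 key3 key4)                    # placeholders for the five DHHC-1 secrets

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Program the target's expected secret/digest/dhgroup, then authenticate from the host.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done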
00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:42.320 
00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.320 00:06:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.579 nvme0n1 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.579 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.838 nvme0n1 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:42.838 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.839 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:42.839 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:42.839 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:42.839 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.839 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.839 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.098 nvme0n1 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.098 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.357 nvme0n1 00:24:43.357 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.357 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.357 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.357 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:43.357 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.357 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.357 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.357 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.357 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.357 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.358 00:06:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.616 nvme0n1 00:24:43.616 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.616 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.616 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:43.616 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.616 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.616 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.617 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.876 nvme0n1 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:43.876 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.877 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.136 nvme0n1 00:24:44.136 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.136 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.136 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:44.136 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.136 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.136 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.136 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.136 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.136 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.136 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.395 nvme0n1 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.395 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.395 00:06:44 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:44.654 00:06:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.654 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.654 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.654 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.654 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.654 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.654 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:44.654 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
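The helper being traced in the entries just above, get_main_ns_ip from nvmf/common.sh, decides which environment variable supplies the address that every bdev_nvme_attach_controller call in this section dials. A minimal sketch reconstructed from these xtrace lines follows; the real function body in nvmf/common.sh may differ in detail, and TEST_TRANSPORT=tcp with NVMF_INITIATOR_IP=10.0.0.1 is assumed from this run.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # Map each transport under test to the variable that holds the reachable address.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Bail out unless the transport is known and mapped.
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable, e.g. NVMF_INITIATOR_IP
    # Indirect expansion: the named variable must actually be set (10.0.0.1 in this run).
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}

For tcp this prints 10.0.0.1, which is exactly the -a argument seen in each attach call below.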
00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.655 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 nvme0n1 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:44.914 00:06:45 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:44.914 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:44.915 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.915 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.915 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:44.915 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.915 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:44.915 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:44.915 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:44.915 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.915 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.915 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:45.174 nvme0n1 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.174 00:06:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:45.743 nvme0n1 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:45.743 
00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.743 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.002 nvme0n1 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:46.002 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:46.003 
00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.003 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.570 nvme0n1 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:46.570 00:06:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.829 nvme0n1 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.829 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:47.099 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.099 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:47.099 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
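Each pass in this section repeats the same sequence per digest/dhgroup/keyid combination: install the matching key on the target, restrict the host to one DH-HMAC-CHAP digest and DH group, attach the controller with the key pair under test, confirm the controller came up, then detach before the next combination. A condensed sketch of one such pass, using only the commands visible in the trace; the key handles key<N>/ckey<N> are assumed to have been registered on the host earlier in the script, and nvmet_auth_set_key is the suite's own target-side helper (its arguments are the ones traced at host/auth.sh@42-51 above).

digest=sha512 dhgroup=ffdhe6144 keyid=1
# Target side: install the key/controller-key pair for this keyid on the nvmet host entry.
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
# Host side: limit negotiation to the combination under test, then connect with DH-HMAC-CHAP.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# Authentication succeeded if the controller shows up (nvme0, with its nvme0n1 namespace);
# tear it down before the next digest/dhgroup/keyid pass.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

Keyids 0-3 pass both a key and a controller key, while keyid 4 passes only --dhchap-key (its ckey is empty, hence the traced [[ -z '' ]] checks).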
00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.100 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:47.401 nvme0n1 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZWMzNDU5Mjc2ZDNkZTg5ZTNmZTE5MTBmZGQ0NzU2MzYKRTsw: 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: ]] 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:YjA4MGI5Mzg4MGRlYmMzNzk4MjBiYjllZTJjMzA1MDc1Nzc0OTA5NDAwZDljOTQ3ODc4YWMyZmMyZGRlMzdjMZ5FC4E=: 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.401 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:47.402 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:47.402 00:06:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:47.402 00:06:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:47.402 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.402 00:06:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:47.969 nvme0n1 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.969 00:06:48 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.969 00:06:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:48.535 nvme0n1 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MDlmM2FiNjU3MDg4NWU5ZTA5MTg2OTlkMzU2M2UxY2I7iK9Y: 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: ]] 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzZTdiODNjMmNlNWIyNjA0ZjY0ZGJkZWNkZTJhMjdUIKI+: 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.535 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:49.101 nvme0n1 00:24:49.101 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.101 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.101 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.101 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:49.101 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:49.101 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:49.360 
00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ4ZDk0NWM1N2JmOWE2OGE5ODE2ZWZhOWQ3MjFiZmIzOTE2M2ZjN2EwNWZkODQ00LyfXg==: 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: ]] 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YmQ3MWY0NGJmN2NkM2ExMjRmMTliNDJlOGJlNzMyYTIHG5Ep: 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.360 00:06:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:49.926 nvme0n1 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.926 00:06:50 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:24:49.926 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ODU0ODE3ZDliZGFiNDY3OTAwN2ZlNjNmNzAzYTk3OWVlZWMxN2RmYTdmNjYzODg5NmNlYjkyZjRjZjI3ZWQ5M2dTRkg=: 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:49.927 00:06:50 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.927 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:50.493 nvme0n1 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJlYWViMGI3MjAwYjJlMjQ4YzEyMjI5Y2E1MjkzMmVkODhhMzkxMDBhZWEyY2Fm9Sn4Ew==: 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: ]] 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NjVlZDdlMDM3NGYyNzdmZDMzM2NiNDA4M2NmYWU0NmJhODhhYTFmNzE0ZDZkMTFiN24pTg==: 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.493 00:06:50 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:50.493 
00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:50.493 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:50.494 request: 00:24:50.494 { 00:24:50.494 "name": "nvme0", 00:24:50.494 "trtype": "tcp", 00:24:50.494 "traddr": "10.0.0.1", 00:24:50.494 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:50.494 "adrfam": "ipv4", 00:24:50.494 "trsvcid": "4420", 00:24:50.494 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:50.494 "method": "bdev_nvme_attach_controller", 00:24:50.494 "req_id": 1 00:24:50.494 } 00:24:50.494 Got JSON-RPC error response 00:24:50.494 response: 00:24:50.494 { 00:24:50.494 "code": -32602, 00:24:50.494 "message": "Invalid parameters" 00:24:50.494 } 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:24:50.494 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.752 00:06:51 
nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # get_main_ns_ip 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:50.752 request: 00:24:50.752 { 00:24:50.752 "name": "nvme0", 00:24:50.752 "trtype": "tcp", 00:24:50.752 "traddr": "10.0.0.1", 00:24:50.752 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:50.752 "adrfam": "ipv4", 00:24:50.752 "trsvcid": "4420", 00:24:50.752 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:50.752 "dhchap_key": "key2", 00:24:50.752 "method": "bdev_nvme_attach_controller", 00:24:50.752 "req_id": 1 00:24:50.752 } 00:24:50.752 Got JSON-RPC error response 00:24:50.752 response: 00:24:50.752 { 00:24:50.752 "code": -32602, 00:24:50.752 "message": "Invalid parameters" 00:24:50.752 } 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:50.752 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq length 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:24:50.753 request: 00:24:50.753 { 00:24:50.753 "name": "nvme0", 00:24:50.753 "trtype": "tcp", 00:24:50.753 "traddr": "10.0.0.1", 00:24:50.753 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:50.753 "adrfam": "ipv4", 00:24:50.753 "trsvcid": "4420", 00:24:50.753 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:50.753 "dhchap_key": "key1", 00:24:50.753 "dhchap_ctrlr_key": "ckey2", 00:24:50.753 "method": "bdev_nvme_attach_controller", 00:24:50.753 "req_id": 1 00:24:50.753 } 00:24:50.753 Got JSON-RPC error response 00:24:50.753 response: 00:24:50.753 { 00:24:50.753 "code": -32602, 00:24:50.753 "message": "Invalid parameters" 00:24:50.753 } 
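The three failed attach attempts above exercise the DH-HMAC-CHAP negative paths: connecting with no key, with the wrong host key (key2), and with a mismatched controller key (key1 paired with ckey2). In each case the controller is never created, bdev_nvme_get_controllers keeps reporting an empty list, and the RPC returns JSON-RPC error -32602 "Invalid parameters". Below is a minimal, purely illustrative sketch of replaying the mismatched-key case by hand; it assumes the same target, addresses, NQNs and key names that this run uses and that the surrounding auth.sh setup has already loaded the keys.

#!/usr/bin/env bash
# Illustrative only: replay the key1/ckey2 mismatch against a target that is
# already serving nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420 with the test's
# DH-HMAC-CHAP keys in place (key/ckey names are the ones used in this run).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Limit the initiator to the digest/DH group under test, as host/auth.sh does.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Pair the right host key with the wrong controller key; the attach is expected
# to fail with -32602 "Invalid parameters", exactly as captured in the log above.
if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
    echo "unexpected: mismatched controller key was accepted"
else
    echo "rejected as expected"
fi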
00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.753 rmmod nvme_tcp 00:24:50.753 rmmod nvme_fabrics 00:24:50.753 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 3700979 ']' 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 3700979 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 3700979 ']' 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 3700979 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3700979 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3700979' 00:24:51.011 killing process with pid 3700979 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 3700979 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 3700979 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.011 00:06:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.541 00:06:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:24:53.541 00:06:53 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:53.541 00:06:53 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:53.541 00:06:53 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:24:53.541 00:06:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:53.542 00:06:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:24:53.542 00:06:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:53.542 00:06:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:53.542 00:06:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:53.542 00:06:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:53.542 00:06:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:53.542 00:06:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:53.542 00:06:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:56.823 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:56.823 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:58.195 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:24:58.195 00:06:58 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.hJz /tmp/spdk.key-null.8bA /tmp/spdk.key-sha256.jjX /tmp/spdk.key-sha384.kUS /tmp/spdk.key-sha512.bpI /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:58.195 00:06:58 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:01.474 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:80:04.7 (8086 2021): Already using the vfio-pci 
driver 00:25:01.474 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:01.474 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:01.474 00:25:01.474 real 0m52.817s 00:25:01.474 user 0m45.252s 00:25:01.474 sys 0m14.910s 00:25:01.474 00:07:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:01.474 00:07:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:25:01.474 ************************************ 00:25:01.474 END TEST nvmf_auth 00:25:01.474 ************************************ 00:25:01.474 00:07:01 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:25:01.474 00:07:01 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:01.474 00:07:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:01.474 00:07:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:01.474 00:07:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:01.474 ************************************ 00:25:01.474 START TEST nvmf_digest 00:25:01.474 ************************************ 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:01.474 * Looking for test storage... 
00:25:01.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:01.474 00:07:01 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:01.474 00:07:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:08.029 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.029 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:08.029 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:08.029 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:08.029 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:08.029 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:08.029 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:08.029 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:08.030 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:08.030 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:08.030 Found net devices under 0000:af:00.0: cvl_0_0 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:08.030 Found net devices under 0000:af:00.1: cvl_0_1 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:08.030 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:08.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:25:08.289 00:25:08.289 --- 10.0.0.2 ping statistics --- 00:25:08.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.289 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:25:08.289 00:25:08.289 --- 10.0.0.1 ping statistics --- 00:25:08.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.289 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:08.289 ************************************ 00:25:08.289 START TEST nvmf_digest_clean 00:25:08.289 ************************************ 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3714832 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3714832 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3714832 ']' 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.289 
00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:08.289 00:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:08.289 [2024-05-15 00:07:08.813773] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:25:08.289 [2024-05-15 00:07:08.813815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.289 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.562 [2024-05-15 00:07:08.887450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.562 [2024-05-15 00:07:08.959982] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.562 [2024-05-15 00:07:08.960018] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.562 [2024-05-15 00:07:08.960027] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.562 [2024-05-15 00:07:08.960035] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.562 [2024-05-15 00:07:08.960042] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
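Because NET_TYPE=phy, the digest tests do not run over loopback: nvmf_tcp_init above moved one of the two detected E810 ports (cvl_0_0) into its own network namespace and left its sibling (cvl_0_1) in the default namespace, so traffic between 10.0.0.1 and 10.0.0.2 crosses the physical ports. The nvmf_tgt whose startup notices appear here is therefore launched under "ip netns exec cvl_0_0_ns_spdk". A condensed sketch of that topology, using only commands already visible in the trace above, looks like this:

# Condensed from the nvmf_tcp_init trace above (not a new script); cvl_0_0 and
# cvl_0_1 are the two Intel E810 net devices detected earlier.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP replies back in
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

# The target is then started inside the namespace and waits for RPC
# configuration before initializing its framework:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc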
00:25:08.562 [2024-05-15 00:07:08.960066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.138 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:09.396 null0 00:25:09.396 [2024-05-15 00:07:09.747328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.396 [2024-05-15 00:07:09.771331] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:09.396 [2024-05-15 00:07:09.771567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3714928 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3714928 /var/tmp/bperf.sock 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3714928 ']' 00:25:09.396 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.397 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:25:09.397 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:09.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.397 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:09.397 00:07:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:09.397 [2024-05-15 00:07:09.827335] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:25:09.397 [2024-05-15 00:07:09.827385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714928 ] 00:25:09.397 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.397 [2024-05-15 00:07:09.896021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.397 [2024-05-15 00:07:09.970610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.329 00:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:10.329 00:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:10.329 00:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:10.329 00:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:10.329 00:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:10.329 00:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.329 00:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.894 nvme0n1 00:25:10.894 00:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:10.894 00:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:10.894 Running I/O for 2 seconds... 
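The client half of the measurement runs over a second RPC socket owned by bdevperf (/var/tmp/bperf.sock): the process is started with --wait-for-rpc, its framework is initialized over that socket, a controller is attached with the NVMe/TCP data digest enabled (--ddgst), and only then is the timed workload kicked off with bdevperf.py. The sequence below simply collects those four commands from the trace above in one place, assuming the target configured earlier is still listening on 10.0.0.2:4420.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf idle (no bdevs yet): 4 KiB random reads, queue depth 128, 2 seconds.
$SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# 2. Finish subsystem init; in the "clean" (non-DSA) variant nothing is tweaked first.
$SPDK/scripts/rpc.py -s $BPERF_SOCK framework_start_init

# 3. Attach the remote namespace with the data digest turned on; computing those
#    digests is the crc32c work the accel statistics account for afterwards.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Run the timed workload against the attached bdev.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests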
00:25:12.795 00:25:12.795 Latency(us) 00:25:12.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.795 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:12.795 nvme0n1 : 2.00 28215.89 110.22 0.00 0.00 4530.62 2385.51 16567.50 00:25:12.795 =================================================================================================================== 00:25:12.795 Total : 28215.89 110.22 0.00 0.00 4530.62 2385.51 16567.50 00:25:12.795 0 00:25:12.795 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:12.795 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:12.795 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:12.795 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:12.795 | select(.opcode=="crc32c") 00:25:12.795 | "\(.module_name) \(.executed)"' 00:25:12.795 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:13.059 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:13.059 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:13.059 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:13.059 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:13.059 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3714928 00:25:13.059 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3714928 ']' 00:25:13.059 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3714928 00:25:13.059 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:13.059 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:13.059 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3714928 00:25:13.060 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:13.060 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:13.060 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3714928' 00:25:13.060 killing process with pid 3714928 00:25:13.060 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3714928 00:25:13.060 Received shutdown signal, test time was about 2.000000 seconds 00:25:13.060 00:25:13.060 Latency(us) 00:25:13.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.060 =================================================================================================================== 00:25:13.060 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.060 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3714928 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:13.317 00:07:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3715725 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3715725 /var/tmp/bperf.sock 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3715725 ']' 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:13.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:13.317 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:13.318 00:07:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:13.318 [2024-05-15 00:07:13.821595] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:25:13.318 [2024-05-15 00:07:13.821648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715725 ] 00:25:13.318 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:13.318 Zero copy mechanism will not be used. 
00:25:13.318 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.318 [2024-05-15 00:07:13.891015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.575 [2024-05-15 00:07:13.960743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.140 00:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:14.140 00:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:14.140 00:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:14.140 00:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:14.140 00:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:14.398 00:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.398 00:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.656 nvme0n1 00:25:14.656 00:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:14.656 00:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:14.656 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:14.656 Zero copy mechanism will not be used. 00:25:14.656 Running I/O for 2 seconds... 
00:25:17.184 00:25:17.184 Latency(us) 00:25:17.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.184 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:17.184 nvme0n1 : 2.00 3237.60 404.70 0.00 0.00 4939.03 4639.95 22439.53 00:25:17.184 =================================================================================================================== 00:25:17.184 Total : 3237.60 404.70 0.00 0.00 4939.03 4639.95 22439.53 00:25:17.184 0 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:17.184 | select(.opcode=="crc32c") 00:25:17.184 | "\(.module_name) \(.executed)"' 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3715725 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3715725 ']' 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3715725 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3715725 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3715725' 00:25:17.184 killing process with pid 3715725 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3715725 00:25:17.184 Received shutdown signal, test time was about 2.000000 seconds 00:25:17.184 00:25:17.184 Latency(us) 00:25:17.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.184 =================================================================================================================== 00:25:17.184 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3715725 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:17.184 00:07:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3716284 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3716284 /var/tmp/bperf.sock 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3716284 ']' 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:17.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:17.184 00:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:17.184 [2024-05-15 00:07:17.697302] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:25:17.184 [2024-05-15 00:07:17.697353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716284 ] 00:25:17.184 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.185 [2024-05-15 00:07:17.766023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.442 [2024-05-15 00:07:17.842490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.008 00:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:18.008 00:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:18.008 00:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:18.008 00:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:18.008 00:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:18.266 00:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.266 00:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.524 nvme0n1 00:25:18.524 00:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:18.524 00:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:18.524 Running I/O for 2 seconds... 
00:25:21.050 00:25:21.050 Latency(us) 00:25:21.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.050 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:21.050 nvme0n1 : 2.00 27684.27 108.14 0.00 0.00 4615.51 2700.08 25794.97 00:25:21.050 =================================================================================================================== 00:25:21.050 Total : 27684.27 108.14 0.00 0.00 4615.51 2700.08 25794.97 00:25:21.050 0 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:21.050 | select(.opcode=="crc32c") 00:25:21.050 | "\(.module_name) \(.executed)"' 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3716284 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3716284 ']' 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3716284 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3716284 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3716284' 00:25:21.050 killing process with pid 3716284 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3716284 00:25:21.050 Received shutdown signal, test time was about 2.000000 seconds 00:25:21.050 00:25:21.050 Latency(us) 00:25:21.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.050 =================================================================================================================== 00:25:21.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3716284 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:21.050 00:07:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3716991 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3716991 /var/tmp/bperf.sock 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3716991 ']' 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:21.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:21.050 00:07:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:21.050 [2024-05-15 00:07:21.577725] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:25:21.050 [2024-05-15 00:07:21.577777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716991 ] 00:25:21.050 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:21.050 Zero copy mechanism will not be used. 
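After each of these runs the script checks, over the same bperf socket, that the crc32c digest work was really executed and which accel module performed it (software here, since DSA scanning is disabled). The accel_get_stats call and the jq filter are exactly the ones shown in the xtrace output; the surrounding shell is a sketch of how the result is consumed:

  # Ask bdevperf which accel module executed the crc32c operations, and how many
  read -r acc_module acc_executed < <(
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

  # Digests must have been computed, and by the expected module
  (( acc_executed > 0 )) || exit 1
  [[ $acc_module == software ]] || exit 1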
00:25:21.050 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.307 [2024-05-15 00:07:21.647797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.308 [2024-05-15 00:07:21.722584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.872 00:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:21.872 00:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:21.872 00:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:21.872 00:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:21.872 00:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:22.130 00:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.130 00:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.391 nvme0n1 00:25:22.391 00:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:22.391 00:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:22.391 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:22.391 Zero copy mechanism will not be used. 00:25:22.391 Running I/O for 2 seconds... 
00:25:24.947 00:25:24.947 Latency(us) 00:25:24.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.947 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:24.947 nvme0n1 : 2.01 2283.98 285.50 0.00 0.00 6992.89 5164.24 29569.84 00:25:24.947 =================================================================================================================== 00:25:24.947 Total : 2283.98 285.50 0.00 0.00 6992.89 5164.24 29569.84 00:25:24.947 0 00:25:24.947 00:07:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:24.947 00:07:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:24.947 00:07:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:24.947 00:07:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:24.947 00:07:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:24.947 | select(.opcode=="crc32c") 00:25:24.947 | "\(.module_name) \(.executed)"' 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3716991 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3716991 ']' 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3716991 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3716991 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3716991' 00:25:24.947 killing process with pid 3716991 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3716991 00:25:24.947 Received shutdown signal, test time was about 2.000000 seconds 00:25:24.947 00:25:24.947 Latency(us) 00:25:24.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.947 =================================================================================================================== 00:25:24.947 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3716991 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3714832 00:25:24.947 00:07:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3714832 ']' 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3714832 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3714832 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3714832' 00:25:24.947 killing process with pid 3714832 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3714832 00:25:24.947 [2024-05-15 00:07:25.461832] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:24.947 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3714832 00:25:25.221 00:25:25.221 real 0m16.907s 00:25:25.221 user 0m32.310s 00:25:25.221 sys 0m4.625s 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:25.221 ************************************ 00:25:25.221 END TEST nvmf_digest_clean 00:25:25.221 ************************************ 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:25.221 ************************************ 00:25:25.221 START TEST nvmf_digest_error 00:25:25.221 ************************************ 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3717661 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3717661 00:25:25.221 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:25.222 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 
3717661 ']' 00:25:25.222 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.222 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:25.222 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.222 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:25.222 00:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.480 [2024-05-15 00:07:25.815546] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:25:25.480 [2024-05-15 00:07:25.815597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.480 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.480 [2024-05-15 00:07:25.891567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.480 [2024-05-15 00:07:25.964427] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.480 [2024-05-15 00:07:25.964468] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.480 [2024-05-15 00:07:25.964477] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.480 [2024-05-15 00:07:25.964486] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.480 [2024-05-15 00:07:25.964494] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
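The nvmf_digest_error test that starts here drives the same bdevperf workloads, but the target is brought up with --wait-for-rpc so that, before its framework is initialized, crc32c can be reassigned to the accel error-injection module (the accel_assign_opc notice below). Once the initiator is connected, the test injects corruption into a batch of crc32c operations, so the data digests the target produces are wrong and the initiator logs the nvme_tcp data digest errors and transient transport completions seen further down, retrying the affected commands (the run sets --bdev-retry-count -1). A sketch of the target-side RPCs visible in the log, assuming the target's default RPC socket /var/tmp/spdk.sock:

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'

  # While the target framework is still paused, route crc32c to the error module
  $RPC accel_assign_opc -o crc32c -m error
  $RPC framework_start_init

  # Injection stays disabled while the controller attaches, then the next 256
  # crc32c operations are corrupted so reads complete with data digest errors
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256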
00:25:25.480 [2024-05-15 00:07:25.964516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.046 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:26.046 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:25:26.046 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:26.046 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.046 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.305 [2024-05-15 00:07:26.662566] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.305 null0 00:25:26.305 [2024-05-15 00:07:26.752321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.305 [2024-05-15 00:07:26.776308] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:26.305 [2024-05-15 00:07:26.776542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3717942 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3717942 /var/tmp/bperf.sock 00:25:26.305 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3717942 ']' 00:25:26.306 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:26.306 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:25:26.306 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:26.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:26.306 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:26.306 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.306 00:07:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:26.306 [2024-05-15 00:07:26.827556] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:25:26.306 [2024-05-15 00:07:26.827605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717942 ] 00:25:26.306 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.306 [2024-05-15 00:07:26.895539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.564 [2024-05-15 00:07:26.974089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.130 00:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:27.130 00:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:25:27.130 00:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:27.130 00:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:27.388 00:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:27.388 00:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.388 00:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:27.388 00:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.388 00:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:27.388 00:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:27.646 nvme0n1 00:25:27.646 00:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:27.646 00:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.646 00:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:27.646 00:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.646 00:07:28 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:27.646 00:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:27.646 Running I/O for 2 seconds... 00:25:27.646 [2024-05-15 00:07:28.219906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.646 [2024-05-15 00:07:28.219943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.646 [2024-05-15 00:07:28.219956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.646 [2024-05-15 00:07:28.229989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.647 [2024-05-15 00:07:28.230013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.647 [2024-05-15 00:07:28.230025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.906 [2024-05-15 00:07:28.238667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.906 [2024-05-15 00:07:28.238693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.906 [2024-05-15 00:07:28.238705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.906 [2024-05-15 00:07:28.247628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.906 [2024-05-15 00:07:28.247650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.906 [2024-05-15 00:07:28.247661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.906 [2024-05-15 00:07:28.256578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.906 [2024-05-15 00:07:28.256600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.906 [2024-05-15 00:07:28.256611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.906 [2024-05-15 00:07:28.265583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.906 [2024-05-15 00:07:28.265605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.906 [2024-05-15 00:07:28.265620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.906 [2024-05-15 00:07:28.274180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x103abb0) 00:25:27.906 [2024-05-15 00:07:28.274206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.906 [2024-05-15 00:07:28.274217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.906 [2024-05-15 00:07:28.284075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.906 [2024-05-15 00:07:28.284096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.906 [2024-05-15 00:07:28.284107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.906 [2024-05-15 00:07:28.292488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.906 [2024-05-15 00:07:28.292508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.906 [2024-05-15 00:07:28.292518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.906 [2024-05-15 00:07:28.301745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.906 [2024-05-15 00:07:28.301766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.906 [2024-05-15 00:07:28.301776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.906 [2024-05-15 00:07:28.309740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.906 [2024-05-15 00:07:28.309761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.906 [2024-05-15 00:07:28.309772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.906 [2024-05-15 00:07:28.318929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.906 [2024-05-15 00:07:28.318950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.318960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.328544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.328566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.328576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.337865] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.337885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.337895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.345682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.345703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.345713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.355560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.355582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.355593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.364604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.364625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.364635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.372945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.372966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.372977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.382183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.382208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.382218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.391813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.391834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.391844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:27.907 [2024-05-15 00:07:28.399093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.399115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.399129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.409209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.409231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.409242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.417947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.417968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.417982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.427148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.427169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.427179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.435557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.435578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.435589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.444750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.444771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.444782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.453783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.453804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.453814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.461430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.461450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.461460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.471298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.471318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.471329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.480873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.480896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.480907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.907 [2024-05-15 00:07:28.488719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:27.907 [2024-05-15 00:07:28.488741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.907 [2024-05-15 00:07:28.488751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.164 [2024-05-15 00:07:28.498440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.164 [2024-05-15 00:07:28.498465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.164 [2024-05-15 00:07:28.498476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.164 [2024-05-15 00:07:28.507108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.164 [2024-05-15 00:07:28.507129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.164 [2024-05-15 00:07:28.507139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.164 [2024-05-15 00:07:28.517129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.164 [2024-05-15 00:07:28.517151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.164 [2024-05-15 00:07:28.517161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.164 [2024-05-15 00:07:28.525374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.164 [2024-05-15 00:07:28.525395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.164 [2024-05-15 00:07:28.525405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.164 [2024-05-15 00:07:28.534189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.164 [2024-05-15 00:07:28.534216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.164 [2024-05-15 00:07:28.534227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.543296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.543317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.543327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.552624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.552645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.552655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.561657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.561677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.561688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.569826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.569847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.569858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.579757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.579776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:28.165 [2024-05-15 00:07:28.579787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.587752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.587773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.587783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.596541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.596562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.596573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.605873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.605894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.605904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.615047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.615067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.615077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.623012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.623032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.623043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.632894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.632914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.632925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.641480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.641500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:17352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.641511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.650033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.650054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.650067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.659881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.659902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.659912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.667824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.667845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.667856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.677042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.677062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.677073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.685943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.685964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.685974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.694346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.694366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.694377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.703715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.703736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.703747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.712979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.713001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.713012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.721399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.721419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.721430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.730677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.730701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.730711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.740418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.740438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.740448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.165 [2024-05-15 00:07:28.747935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.165 [2024-05-15 00:07:28.747955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.165 [2024-05-15 00:07:28.747966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.758370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.758391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.758402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.766848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 
00:25:28.423 [2024-05-15 00:07:28.766869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.766879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.775996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.776016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.776027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.784330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.784350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.784360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.794016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.794037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.794048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.801966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.801986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.801996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.812189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.812215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.812226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.820638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.820659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.820670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.829806] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.829827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.829837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.838699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.838719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.838730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.847338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.847359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.847369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.856600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.856620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.856632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.864225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.864245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.864256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.873907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.873927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.423 [2024-05-15 00:07:28.873938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.423 [2024-05-15 00:07:28.883702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.423 [2024-05-15 00:07:28.883726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.883737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:28.424 [2024-05-15 00:07:28.891498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.891518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.891529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.901366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.901387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.901398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.909096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.909117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.909128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.919365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.919386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.919397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.927620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.927641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.927652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.937827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.937848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.937859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.946655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.946675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.946686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.955799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.955819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.955830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.965414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.965435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.965446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.973437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.973458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.973469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.983662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.983684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.983695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:28.991915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:28.991936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:28.991947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:29.001330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:29.001351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:29.001361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.424 [2024-05-15 00:07:29.009229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.424 [2024-05-15 00:07:29.009251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.424 [2024-05-15 00:07:29.009261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.018515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.018537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.683 [2024-05-15 00:07:29.018548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.027823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.027844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.683 [2024-05-15 00:07:29.027854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.037678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.037699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.683 [2024-05-15 00:07:29.037713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.045463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.045483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.683 [2024-05-15 00:07:29.045494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.055236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.055257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.683 [2024-05-15 00:07:29.055267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.063142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.063163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.683 [2024-05-15 00:07:29.063173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.072376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.072397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:28.683 [2024-05-15 00:07:29.072407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.081468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.081490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.683 [2024-05-15 00:07:29.081500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.090878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.090899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.683 [2024-05-15 00:07:29.090910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.098760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.098781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.683 [2024-05-15 00:07:29.098791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.683 [2024-05-15 00:07:29.108130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.683 [2024-05-15 00:07:29.108151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.683 [2024-05-15 00:07:29.108162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.117882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.117906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.117916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.125523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.125543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.125554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.134977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.134997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:12921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.135008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.143833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.143853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.143864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.152556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.152577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.152588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.161957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.161978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.161988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.170495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.170516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.170527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.179649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.179670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.179680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.188095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.188117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.188128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.197744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.197766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.197777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.206243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.206265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.206276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.214864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.214886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.214896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.224038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.224060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.224071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.233423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.233444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.233455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.242696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.242717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.242728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.251983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.252004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.252015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.259830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 
00:25:28.684 [2024-05-15 00:07:29.259851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.259861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.684 [2024-05-15 00:07:29.268597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.684 [2024-05-15 00:07:29.268618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.684 [2024-05-15 00:07:29.268634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.943 [2024-05-15 00:07:29.277872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.943 [2024-05-15 00:07:29.277894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.943 [2024-05-15 00:07:29.277905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.943 [2024-05-15 00:07:29.287333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.943 [2024-05-15 00:07:29.287354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.943 [2024-05-15 00:07:29.287365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.943 [2024-05-15 00:07:29.295412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.943 [2024-05-15 00:07:29.295432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.943 [2024-05-15 00:07:29.295443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.943 [2024-05-15 00:07:29.304811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.943 [2024-05-15 00:07:29.304832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.943 [2024-05-15 00:07:29.304842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.943 [2024-05-15 00:07:29.313607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.943 [2024-05-15 00:07:29.313629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.943 [2024-05-15 00:07:29.313639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.943 [2024-05-15 00:07:29.322705] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.943 [2024-05-15 00:07:29.322726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.943 [2024-05-15 00:07:29.322737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.943 [2024-05-15 00:07:29.331158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.943 [2024-05-15 00:07:29.331180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.943 [2024-05-15 00:07:29.331197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.943 [2024-05-15 00:07:29.340122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.943 [2024-05-15 00:07:29.340143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.943 [2024-05-15 00:07:29.340153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.944 [2024-05-15 00:07:29.349030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.349050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.349061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.944 [2024-05-15 00:07:29.358787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.358808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.358818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.944 [2024-05-15 00:07:29.367798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.367819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.367830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.944 [2024-05-15 00:07:29.375821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.375843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.375853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:28.944 [2024-05-15 00:07:29.385745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.385766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.385777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.944 [2024-05-15 00:07:29.394155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.394175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.394185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.944 [2024-05-15 00:07:29.402733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.402754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.402765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.944 [2024-05-15 00:07:29.411954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.411975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.411985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.944 [2024-05-15 00:07:29.420519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.420540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.420554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.944 [2024-05-15 00:07:29.428769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.428791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.428802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.944 [2024-05-15 00:07:29.438082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x103abb0) 00:25:28.944 [2024-05-15 00:07:29.438104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.944 [2024-05-15 00:07:29.438115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[2024-05-15 00:07:29.448480 - 00:07:30.190949: several dozen further single-block (len:1) READ completions elided; each logs "data digest error on tqpair=(0x103abb0)" from nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done and is then printed by nvme_qpair.c as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1]
00:25:29.729
00:25:29.729 Latency(us)
00:25:29.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:29.729 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:29.729 nvme0n1 : 2.00 27731.59 108.33 0.00 0.00 4611.18 2188.90 21915.24
00:25:29.729 ===================================================================================================================
00:25:29.729 Total : 27731.59 108.33 0.00 0.00 4611.18 2188.90 21915.24
00:25:29.729 0
00:25:29.729 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:25:29.991
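The pass/fail check for this subtest is the transient-error count read back just above. Assembled into a single command, it is roughly the following (a sketch put together from the traced digest.sh helpers; the rpc.py path, socket and bdev name are taken verbatim from the trace):

# Read the per-bdev NVMe error counters and pull out the one status code this test cares about
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
# The subtest passes when the printed count is greater than zero; in this run it was 217.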
00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3717942 00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3717942 ']' 00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3717942 00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3717942 00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3717942' 00:25:29.991 killing process with pid 3717942 00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3717942 00:25:29.991 Received shutdown signal, test time was about 2.000000 seconds 00:25:29.991 00:25:29.991 Latency(us) 00:25:29.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.991 =================================================================================================================== 00:25:29.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:29.991 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3717942 00:25:30.249 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:30.249 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:30.249 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:30.249 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:30.249 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:30.249 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3718496 00:25:30.249 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3718496 /var/tmp/bperf.sock 00:25:30.250 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:30.250 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3718496 ']' 00:25:30.250 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:30.250 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:30.250 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:30.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
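Stripped of the xtrace prefixes, the relaunch traced above amounts to starting a fresh bdevperf in RPC-wait mode and pointing the helpers at its socket. A sketch of reproducing the same launch by hand (binary path, socket and parameters copied from the trace; backgrounding with & and capturing $! stand in for the harness's own bookkeeping):

# 128 KiB random reads, queue depth 16, 2-second run; -z makes bdevperf idle until an
# explicit perform_tests RPC instead of starting I/O immediately
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# the harness then polls the socket (waitforlisten) before issuing any RPCs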
00:25:30.250 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:30.250 00:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:30.250 [2024-05-15 00:07:30.685781] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:25:30.250 [2024-05-15 00:07:30.685835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718496 ] 00:25:30.250 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:30.250 Zero copy mechanism will not be used. 00:25:30.250 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.250 [2024-05-15 00:07:30.755915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.250 [2024-05-15 00:07:30.819807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.182 00:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:31.182 00:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:25:31.182 00:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:31.182 00:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:31.182 00:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:31.182 00:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.182 00:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:31.182 00:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.182 00:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.182 00:07:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.748 nvme0n1 00:25:31.748 00:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:31.748 00:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.748 00:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:31.748 00:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.748 00:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:31.748 00:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:31.748 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:31.748 Zero copy mechanism will not be used. 
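Once the new bdevperf is listening, the per-run setup traced around here reduces to a handful of RPCs. A sketch assembled from the surrounding trace (the RPC shell variable is only for brevity; the accel_error_inject_error calls go through the harness's generic rpc_cmd wrapper, whose target socket is not shown in this excerpt):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# keep per-status-code NVMe error counters and retry failed I/O indefinitely at the bdev layer
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# attach the target subsystem over TCP with data digest enabled, so data PDUs carry a CRC32C
# that the initiator verifies on receive
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# the trace also shows accel_error_inject_error -o crc32c being switched from "-t disable" to
# "-t corrupt -i 32" via rpc_cmd, which is what makes those digest checks fail during the run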
00:25:31.748 Running I/O for 2 seconds... 00:25:31.748 [2024-05-15 00:07:32.165500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:31.748 [2024-05-15 00:07:32.165535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.748 [2024-05-15 00:07:32.165548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.748 [2024-05-15 00:07:32.184718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:31.748 [2024-05-15 00:07:32.184746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.748 [2024-05-15 00:07:32.184757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.748 [2024-05-15 00:07:32.203051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:31.748 [2024-05-15 00:07:32.203073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.748 [2024-05-15 00:07:32.203085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.748 [2024-05-15 00:07:32.223060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:31.748 [2024-05-15 00:07:32.223081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.748 [2024-05-15 00:07:32.223092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.748 [2024-05-15 00:07:32.243228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:31.748 [2024-05-15 00:07:32.243249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.748 [2024-05-15 00:07:32.243263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.748 [2024-05-15 00:07:32.257751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:31.748 [2024-05-15 00:07:32.257771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.748 [2024-05-15 00:07:32.257782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.748 [2024-05-15 00:07:32.269809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:31.748 [2024-05-15 00:07:32.269832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.748 [2024-05-15 00:07:32.269843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[2024-05-15 00:07:32.280271 - 00:07:32.659946: several dozen further 128 KiB (len:32) READ completions on qid:1 cid:15 elided; each logs "data digest error on tqpair=(0x1a5fa00)" from nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done and is then printed as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion]
00:25:32.267 [2024-05-15 00:07:32.669489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00)
00:25:32.267 [2024-05-15 00:07:32.669511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:32.267 [2024-05-15 00:07:32.669525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001
p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.679070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.679090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.679101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.688686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.688708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.688719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.698279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.698300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.698310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.707965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.707986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.707997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.717578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.717599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.717609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.727295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.727326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.727336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.736967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.736988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.736998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.746587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.746608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.746619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.756226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.756250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.756261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.765885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.765906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.765916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.775516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.775536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.775546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.785131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.785153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.785163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.794863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.794885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.794898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.804447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.804469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.804479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.814032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.814053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.814064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.823673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.823693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.823704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.833312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.267 [2024-05-15 00:07:32.833335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.267 [2024-05-15 00:07:32.833349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.267 [2024-05-15 00:07:32.842931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.268 [2024-05-15 00:07:32.842953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.268 [2024-05-15 00:07:32.842964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.268 [2024-05-15 00:07:32.852665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.268 [2024-05-15 00:07:32.852686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.268 [2024-05-15 00:07:32.852697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.862372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.862394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.862405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.872160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.872181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:32.527 [2024-05-15 00:07:32.872197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.881838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.881859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.881869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.891528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.891550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.891560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.901173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.901201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.901212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.910835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.910857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.910868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.920493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.920520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.920530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.930173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.930198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.930209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.939819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.939840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.939850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.949522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.949543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.949554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.959227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.959248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.959258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.527 [2024-05-15 00:07:32.968906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.527 [2024-05-15 00:07:32.968926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.527 [2024-05-15 00:07:32.968936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:32.978661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:32.978681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:32.978692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:32.988332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:32.988353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:32.988363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:32.997973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:32.997994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:32.998004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.007659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.007681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.007690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.017330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.017350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.017360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.027042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.027063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.027073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.036683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.036704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.036714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.046305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.046326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.046336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.055932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.055956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.055968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.065555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.065576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.065587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.075207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 
00:25:32.528 [2024-05-15 00:07:33.075228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.075238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.084929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.084950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.084963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.094663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.094684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.094694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.104338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.104359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.104369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.528 [2024-05-15 00:07:33.114058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.528 [2024-05-15 00:07:33.114079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.528 [2024-05-15 00:07:33.114090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.123865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.123886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.123896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.133580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.133601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.133611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.143304] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.143325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.143335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.152931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.152952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.152962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.162653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.162674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.162684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.172307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.172327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.172337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.182006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.182028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.182038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.191690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.191711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.191721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.201343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.201363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.201373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.211244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.211264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.211275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.220976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.220996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.221006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.230597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.230617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.787 [2024-05-15 00:07:33.230627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.787 [2024-05-15 00:07:33.240418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.787 [2024-05-15 00:07:33.240439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.240448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.250055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.250076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.250089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.259803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.259824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.259834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.269449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.269470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.269480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.279097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.279118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.279128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.288729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.288750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.288760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.298390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.298411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.298421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.308014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.308034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.308044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.317666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.317687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.317697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.327296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.327316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.327326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.336947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.336971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.336981] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.346618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.346638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.346648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.356282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.356302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.356313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.365902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.365923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.365933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.788 [2024-05-15 00:07:33.375535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:32.788 [2024-05-15 00:07:33.375555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.788 [2024-05-15 00:07:33.375565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.385224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.385245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.385255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.394912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.394933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.394943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.404584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.404605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.404647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.414479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.414500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.414510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.424053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.424074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.424084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.433812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.433833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.433844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.443465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.443485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.443495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.453126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.453147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.453157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.462725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.462745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.462755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.472387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.472407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.472417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.482053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.482073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.482084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.491801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.491822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.491832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.501440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.501461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.501475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.511099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.511121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.511131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.520756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.520777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.520788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.530412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.047 [2024-05-15 00:07:33.530433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.047 [2024-05-15 00:07:33.530443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.047 [2024-05-15 00:07:33.540174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.540201] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.540212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.048 [2024-05-15 00:07:33.549772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.549792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.549802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.048 [2024-05-15 00:07:33.559325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.559346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.559356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.048 [2024-05-15 00:07:33.568860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.568880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.568890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.048 [2024-05-15 00:07:33.578466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.578487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.578496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.048 [2024-05-15 00:07:33.588016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.588037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.588048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.048 [2024-05-15 00:07:33.597576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.597598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.597608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.048 [2024-05-15 00:07:33.607168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.607197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.607208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.048 [2024-05-15 00:07:33.616779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.616799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.616809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.048 [2024-05-15 00:07:33.626385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.626406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.626416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.048 [2024-05-15 00:07:33.635971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.048 [2024-05-15 00:07:33.635992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.048 [2024-05-15 00:07:33.636003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.307 [2024-05-15 00:07:33.645589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.307 [2024-05-15 00:07:33.645610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-05-15 00:07:33.645620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.307 [2024-05-15 00:07:33.655173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.307 [2024-05-15 00:07:33.655200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-05-15 00:07:33.655210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.307 [2024-05-15 00:07:33.664780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.307 [2024-05-15 00:07:33.664801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-05-15 00:07:33.664815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.307 [2024-05-15 00:07:33.674364] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.307 [2024-05-15 00:07:33.674384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-05-15 00:07:33.674394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.307 [2024-05-15 00:07:33.684050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.307 [2024-05-15 00:07:33.684072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.684083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.693695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.693715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.693726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.703293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.703314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.703324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.712988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.713009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.713020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.722664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.722685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.722695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.732303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.732323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.732333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.741934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.741954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.741965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.751514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.751537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.751547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.761103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.761124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.761134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.770652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.770672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.770682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.780281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.780301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.780312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.789865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.789886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.789896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.799537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.799558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.799568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.809128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.809148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.809158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.818679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.818700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.818710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.828336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.828356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.828367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.837897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.837919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.837929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.847472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.847493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.847503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.857035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.857056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.857066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.866573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.866593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.866603] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.876068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.876089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.876099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.885587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.885607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.885618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.308 [2024-05-15 00:07:33.895112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.308 [2024-05-15 00:07:33.895133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.308 [2024-05-15 00:07:33.895143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:33.904698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:33.904719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:33.904729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:33.914337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:33.914357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:33.914371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:33.923894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:33.923916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:33.923926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:33.933528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:33.933548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:33.933558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:33.943151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:33.943173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:33.943183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:33.952769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:33.952792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:33.952802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:33.962418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:33.962440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:33.962450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:33.971992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:33.972014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:33.972024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:33.981613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:33.981634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:33.981644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:33.991293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:33.991315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:33.991326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.567 [2024-05-15 00:07:34.000867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.567 [2024-05-15 00:07:34.000888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.567 [2024-05-15 00:07:34.000898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.010444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.010465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.010475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.020010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.020031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.020041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.029573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.029594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.029604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.039157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.039178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.039188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.048908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.048929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.048939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.058518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.058538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.058549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.068057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.068078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.068088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.077607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.077629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.077642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.087153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.087174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.087184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.096798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.096819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.096830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.106438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.106459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.106469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.116125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.116146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.116156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.125660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 00:25:33.568 [2024-05-15 00:07:34.125681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.568 [2024-05-15 00:07:34.125691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.568 [2024-05-15 00:07:34.135332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00) 
00:25:33.568 [2024-05-15 00:07:34.135353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.568 [2024-05-15 00:07:34.135363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:33.568 [2024-05-15 00:07:34.144788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5fa00)
00:25:33.568 [2024-05-15 00:07:34.144809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.568 [2024-05-15 00:07:34.144819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:33.568
00:25:33.568 Latency(us)
00:25:33.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:33.568 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:33.568 nvme0n1 : 2.00 3111.36 388.92 0.00 0.00 5139.09 4666.16 20761.80
00:25:33.568 ===================================================================================================================
00:25:33.568 Total : 3111.36 388.92 0.00 0.00 5139.09 4666.16 20761.80
00:25:33.568 0
00:25:33.827 00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:33.828 | .driver_specific
00:25:33.828 | .nvme_error
00:25:33.828 | .status_code
00:25:33.828 | .command_transient_transport_error'
00:25:33.828 00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 ))
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3718496
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3718496 ']'
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3718496
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3718496
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3718496'
killing process with pid 3718496
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3718496
Received shutdown signal, test time was about 2.000000 seconds
00:25:33.828
00:25:33.828 Latency(us)
00:25:33.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:33.828 ===================================================================================================================
00:25:33.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
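The xtrace lines above are the pass/fail check for this randread run: host/digest.sh asks the bdevperf instance on /var/tmp/bperf.sock for the bdev's I/O statistics, pulls out the transient-transport-error counter exposed under driver_specific.nvme_error (collected because the controller was set up with --nvme-error-stat), and requires it to be non-zero; here the count was 201. A stand-alone sketch of that check, built only from the commands visible in the trace (the errcount variable name is illustrative, not part of the script):

    # Read per-bdev NVMe error statistics over the bperf RPC socket and extract the
    # count of completions that came back as COMMAND TRANSIENT TRANSPORT ERROR.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The run only passes if the injected digest corruption actually produced such errors.
    (( errcount > 0 ))

The summary table is also self-consistent with the job parameters: 3111.36 completions/s at an IO size of 131072 bytes is 3111.36 * 131072 / 1048576 ≈ 388.92 MiB/s, which matches the MiB/s column.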
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3718496
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3719285
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3719285 /var/tmp/bperf.sock
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3719285 ']'
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:34.087 [2024-05-15 00:07:34.642554] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization...
00:25:34.087 [2024-05-15 00:07:34.642612] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719285 ]
00:25:34.087 EAL: No free 2048 kB hugepages reported on node 1
00:25:34.346 [2024-05-15 00:07:34.712424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:34.346 [2024-05-15 00:07:34.780499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:34.914 00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:35.173 00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:35.432 nvme0n1
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:35.432 Running I/O for 2 seconds...
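Before the 2-second randwrite run starts, the trace above shows the whole error-injection setup done over the same RPC socket: NVMe error statistics are enabled and the bdev retry count set to -1, the controller is attached over NVMe/TCP with the data digest (--ddgst) turned on, and the accel framework is told to corrupt crc32c operations with an injection argument of 256 so that only some digest calculations are wrong. A condensed sketch of that sequence, using only commands that appear in the trace (rpc.py and bdevperf.py abbreviate the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/... paths shown above):

    # Collect per-controller NVMe error counters (--nvme-error-stat) and set the bdev
    # retry count to -1, exactly as digest.sh does for this error-path test.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the target subsystem over NVMe/TCP with data digest (DDGST) enabled.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Inject crc32c corruption in the accel layer (-t corrupt -i 256) so a fraction of
    # the data digests no longer match the payload.
    rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the workload defined on the bdevperf command line (-w randwrite -o 4096 -q 128 -t 2).
    bdevperf.py -s /var/tmp/bperf.sock perform_tests

Because only a fraction of the crc32c operations are corrupted, the run keeps making forward progress while the log that follows fills with data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions.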
00:25:35.432 [2024-05-15 00:07:35.947721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:35.432 [2024-05-15 00:07:35.948532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.432 [2024-05-15 00:07:35.948563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:35.432 [2024-05-15 00:07:35.956946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.432 [2024-05-15 00:07:35.957168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.432 [2024-05-15 00:07:35.957199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.432 [2024-05-15 00:07:35.966185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.432 [2024-05-15 00:07:35.966401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.432 [2024-05-15 00:07:35.966426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.432 [2024-05-15 00:07:35.975447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.432 [2024-05-15 00:07:35.975666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.432 [2024-05-15 00:07:35.975686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.432 [2024-05-15 00:07:35.984603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.432 [2024-05-15 00:07:35.984829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.432 [2024-05-15 00:07:35.984850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.432 [2024-05-15 00:07:35.993818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.432 [2024-05-15 00:07:35.994062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.432 [2024-05-15 00:07:35.994083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.432 [2024-05-15 00:07:36.002980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.432 [2024-05-15 00:07:36.003214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.432 [2024-05-15 00:07:36.003235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:25:35.432 [2024-05-15 00:07:36.012176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.432 [2024-05-15 00:07:36.012421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.432 [2024-05-15 00:07:36.012441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.432 [2024-05-15 00:07:36.021460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.432 [2024-05-15 00:07:36.021685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.432 [2024-05-15 00:07:36.021705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.030753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.030976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.030996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.039938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.040160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.040180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.049063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.049289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.049308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.058213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.058432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.058451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.067334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.067575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.067595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.076505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.076723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.076742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.085554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.085777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.085797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.094685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.094904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.094924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.103808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.104025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.104045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.112969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.113209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.113229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.122119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.122353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.122373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.131255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.131495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.131515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.140439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.140677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.140696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.149627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.149848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.149867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.158713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.158933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.158952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.167798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.168013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.168033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.176917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.177150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.177171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.186023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.186241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.186261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.195120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.195346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.195366] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.204220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.204455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.204478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.213707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.213929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.213950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.222976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.223216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.223236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.232227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.232452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.232473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.241446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.241713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.241733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.691 [2024-05-15 00:07:36.250522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.691 [2024-05-15 00:07:36.251421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.691 [2024-05-15 00:07:36.251441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.692 [2024-05-15 00:07:36.259751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.692 [2024-05-15 00:07:36.259999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.692 [2024-05-15 00:07:36.260019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.692 [2024-05-15 00:07:36.268893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.692 [2024-05-15 00:07:36.269104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.692 [2024-05-15 00:07:36.269123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.692 [2024-05-15 00:07:36.278033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.692 [2024-05-15 00:07:36.278244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.692 [2024-05-15 00:07:36.278263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.950 [2024-05-15 00:07:36.287354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.950 [2024-05-15 00:07:36.288035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.950 [2024-05-15 00:07:36.288054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.950 [2024-05-15 00:07:36.296464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.950 [2024-05-15 00:07:36.297610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.950 [2024-05-15 00:07:36.297631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.950 [2024-05-15 00:07:36.305559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.950 [2024-05-15 00:07:36.305780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.950 [2024-05-15 00:07:36.305800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.950 [2024-05-15 00:07:36.314708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.950 [2024-05-15 00:07:36.315078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.950 [2024-05-15 00:07:36.315097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.950 [2024-05-15 00:07:36.323851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.950 [2024-05-15 00:07:36.324304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.950 [2024-05-15 
00:07:36.324324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.950 [2024-05-15 00:07:36.333002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f92c0 00:25:35.950 [2024-05-15 00:07:36.333325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.950 [2024-05-15 00:07:36.333345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.950 [2024-05-15 00:07:36.342774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fc560 00:25:35.950 [2024-05-15 00:07:36.343691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.950 [2024-05-15 00:07:36.343712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:35.950 [2024-05-15 00:07:36.352099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.950 [2024-05-15 00:07:36.352350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.950 [2024-05-15 00:07:36.352371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.950 [2024-05-15 00:07:36.361247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.361453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.361474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.370402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.370754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.370774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.379494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.379801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.379821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.388552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.388721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:35.951 [2024-05-15 00:07:36.388740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.397707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.397897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.397917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.406802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.407001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.407021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.415957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.416290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.416310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.425100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.425289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.425309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.434234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.434570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.434590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.443302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.443489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.443511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.452386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.452799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:322 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.452819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.461514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.461908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.461928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.470801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.471703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.471723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.480096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.480350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.480370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.489321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.489525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.489545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.498611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.499096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.499116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.507790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.508015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.508034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.516906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.517101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:21910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.517127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.525973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.526279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.526299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.951 [2024-05-15 00:07:36.535119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:35.951 [2024-05-15 00:07:36.535520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.951 [2024-05-15 00:07:36.535539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.544422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.544795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.544815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.553571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.554326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.554356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.562729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.562951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.562970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.571817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.572011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.572035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.580966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.581151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:122 nsid:1 lba:15761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.581169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.590068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.591642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.591663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.600590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.600862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.600882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.609736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.609981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.610002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.618851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.619084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.619104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.627965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.628363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.628383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.637105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.637340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.637360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.646215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.646688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.646706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.655318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.655530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.655550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.664403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.210 [2024-05-15 00:07:36.664623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.210 [2024-05-15 00:07:36.664643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.210 [2024-05-15 00:07:36.673473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.211 [2024-05-15 00:07:36.673712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.673731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.682598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.211 [2024-05-15 00:07:36.682858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.682881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.691768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.211 [2024-05-15 00:07:36.692001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.692021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.700796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.211 [2024-05-15 00:07:36.701038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.701057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.710156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.211 [2024-05-15 00:07:36.710405] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.710425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.719316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.211 [2024-05-15 00:07:36.719536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.719555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.728658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.211 [2024-05-15 00:07:36.729029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.729049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.737983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.211 [2024-05-15 00:07:36.738349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.738368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.747250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.211 [2024-05-15 00:07:36.747590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.747610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.756464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f8a50 00:25:36.211 [2024-05-15 00:07:36.756918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.756937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.765621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.211 [2024-05-15 00:07:36.767096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.767116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.776023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.211 [2024-05-15 
00:07:36.776898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.776918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.785171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.211 [2024-05-15 00:07:36.785661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.785680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.211 [2024-05-15 00:07:36.794309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.211 [2024-05-15 00:07:36.794673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.211 [2024-05-15 00:07:36.794693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.803592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.804237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.804256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.812898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.813157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.813177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.822224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.822465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.822484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.831502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.831757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.831776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.841028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 
00:07:36.841273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.841296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.850245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.850495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.850515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.859455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.859698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.859720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.868531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.868773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.868793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.877622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.877858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.877878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.886785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.887232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.887253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.895918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.896136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.896155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.905000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 
00:07:36.905236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.905256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.914131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.914397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.914416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.923283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.923522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.923541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.932404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.932661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.932681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.941495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fcdd0 00:25:36.508 [2024-05-15 00:07:36.942875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.942894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.953561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fb480 00:25:36.508 [2024-05-15 00:07:36.954466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.954488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.962808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fc560 00:25:36.508 [2024-05-15 00:07:36.963791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.963812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.971640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f4f40 00:25:36.508 
[2024-05-15 00:07:36.972740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.972761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.980632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190eee38 00:25:36.508 [2024-05-15 00:07:36.981728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.981749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.989558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fc560 00:25:36.508 [2024-05-15 00:07:36.990640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.990661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:36.998390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f4f40 00:25:36.508 [2024-05-15 00:07:36.999386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:36.999407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:37.007174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190eee38 00:25:36.508 [2024-05-15 00:07:37.008176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:37.008203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:37.015935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fc560 00:25:36.508 [2024-05-15 00:07:37.017007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:37.017026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:37.024632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f4f40 00:25:36.508 [2024-05-15 00:07:37.026289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:37.026307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:37.033495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f6cc8 
00:25:36.508 [2024-05-15 00:07:37.034469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:37.034489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:37.042037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f3e60 00:25:36.508 [2024-05-15 00:07:37.042899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:37.042919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:37.050768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f4f40 00:25:36.508 [2024-05-15 00:07:37.051665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:37.051684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:37.059574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190f6020 00:25:36.508 [2024-05-15 00:07:37.060474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:37.060494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:37.068255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fb8b8 00:25:36.508 [2024-05-15 00:07:37.069138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:37.069157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:37.077035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.508 [2024-05-15 00:07:37.078872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:37.078891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:36.508 [2024-05-15 00:07:37.088166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fc560 00:25:36.508 [2024-05-15 00:07:37.089184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.508 [2024-05-15 00:07:37.089209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.097325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with 
pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.097637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.097657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.106824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.107062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.107081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.116154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.116366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.116387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.125543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.125726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.125747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.134915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.135104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.135123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.144203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.144409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.144428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.153487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.153689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.153709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.162690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.162884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.162903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.171854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.172049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.172068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.181008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.181202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.181237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.190166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.190381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.190402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.199426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.199617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.199645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.208802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.208998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.209019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.218010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.218218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.218238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.227255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.227449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.227468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.236654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.236866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.236887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.245936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.246144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.246167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.255275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.255467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.255488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.264431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.264639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.264658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.273616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.273799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.273817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.282981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.283172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.283197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.292330] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.292526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.292546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.301662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.301859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.301879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.311060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.311259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.311278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.320653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.320853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.320873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.330548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.330757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.330776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.340021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.340222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.340241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 00:07:37.349382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.349575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.349598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.774 [2024-05-15 
00:07:37.358758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:36.774 [2024-05-15 00:07:37.358949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.774 [2024-05-15 00:07:37.358981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.033 [2024-05-15 00:07:37.368096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.033 [2024-05-15 00:07:37.368285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.033 [2024-05-15 00:07:37.368306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.033 [2024-05-15 00:07:37.377442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.033 [2024-05-15 00:07:37.377648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.033 [2024-05-15 00:07:37.377668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.033 [2024-05-15 00:07:37.386758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.033 [2024-05-15 00:07:37.386950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.033 [2024-05-15 00:07:37.386978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.033 [2024-05-15 00:07:37.395918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.033 [2024-05-15 00:07:37.396110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.396138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.405032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.405223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.405242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.414155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.414367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.414388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:25:37.034 [2024-05-15 00:07:37.423359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.423548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.423566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.432455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.432644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.432663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.441650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.441842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.441868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.450751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.450957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.450976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.459923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.460111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.460129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.468999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.469188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.469211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.478157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.478372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.478391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a 
p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.487498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.487704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.487729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.496774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.496983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.497003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.506071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.506280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.506299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.515307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.515496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.515514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.524437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.524625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.524643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.533644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.533842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.533867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.542841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.543044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.543064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.552029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.552235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.552254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.561247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.561441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.561460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.570596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.570792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.570815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.579879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.580069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.580090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.589049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.589245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.589264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.598206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.598411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.598431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.607336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.607523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.607542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.034 [2024-05-15 00:07:37.616427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.034 [2024-05-15 00:07:37.616620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.034 [2024-05-15 00:07:37.616657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.293 [2024-05-15 00:07:37.625746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.293 [2024-05-15 00:07:37.625937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.293 [2024-05-15 00:07:37.625957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.293 [2024-05-15 00:07:37.635076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.293 [2024-05-15 00:07:37.635273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.293 [2024-05-15 00:07:37.635292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.293 [2024-05-15 00:07:37.644155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.293 [2024-05-15 00:07:37.644351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.293 [2024-05-15 00:07:37.644379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.293 [2024-05-15 00:07:37.653326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.293 [2024-05-15 00:07:37.653525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.293 [2024-05-15 00:07:37.653544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.293 [2024-05-15 00:07:37.662457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.293 [2024-05-15 00:07:37.662644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.293 [2024-05-15 00:07:37.662663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.671679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.671870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.671897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.680823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.681011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.681030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.689942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.690132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.690150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.699103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.699307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.699326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.708251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.708439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.708458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.717325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.717534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.717554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.726564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.726772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.726792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.735746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.735934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.735953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.745078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.745269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.745290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.754433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.754628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.754648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.763685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.763892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.763912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.772875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.773065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.773092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.781989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.782177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.782199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.791060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.791249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.791268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.800166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.800376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.800396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.809321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.809511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.809532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.818398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.818583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.818602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.827509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.827696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.827717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.836749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.836938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.836967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.846078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.846267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.846287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.855218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.855412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.855432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.864274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.864479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.864498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.873448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.873636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.873655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.294 [2024-05-15 00:07:37.882612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.294 [2024-05-15 00:07:37.882805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.294 [2024-05-15 00:07:37.882827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.553 [2024-05-15 00:07:37.891917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.553 [2024-05-15 00:07:37.892107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.553 [2024-05-15 00:07:37.892126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.554 [2024-05-15 00:07:37.901074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.554 [2024-05-15 00:07:37.901281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.554 [2024-05-15 00:07:37.901300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.554 [2024-05-15 00:07:37.910217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.554 [2024-05-15 00:07:37.910408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.554 [2024-05-15 00:07:37.910435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.554 [2024-05-15 00:07:37.919288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6c6e0) with pdu=0x2000190fe720 00:25:37.554 [2024-05-15 00:07:37.919473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.554 [2024-05-15 00:07:37.919492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:37.554 00:25:37.554 Latency(us) 00:25:37.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.554 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:37.554 nvme0n1 : 2.00 27444.26 107.20 0.00 0.00 4655.83 2844.26 25165.82 00:25:37.554 
===================================================================================================================
00:25:37.554 Total : 27444.26 107.20 0.00 0.00 4655.83 2844.26 25165.82
00:25:37.554 0
00:25:37.554 00:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:37.554 00:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:37.554 00:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:37.554 | .driver_specific
00:25:37.554 | .nvme_error
00:25:37.554 | .status_code
00:25:37.554 | .command_transient_transport_error'
00:25:37.554 00:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:37.554 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 ))
00:25:37.554 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3719285
00:25:37.554 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3719285 ']'
00:25:37.554 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3719285
00:25:37.554 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:25:37.554 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:25:37.554 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3719285
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3719285'
00:25:37.813 killing process with pid 3719285
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3719285
00:25:37.813 Received shutdown signal, test time was about 2.000000 seconds
00:25:37.813
00:25:37.813 Latency(us)
00:25:37.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:37.813 ===================================================================================================================
00:25:37.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3719285
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3719841
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3719841 /var/tmp/bperf.sock
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@827 -- # '[' -z 3719841 ']'
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:37.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:37.813 00:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:38.072 [2024-05-15 00:07:38.415072] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization...
00:25:38.072 [2024-05-15 00:07:38.415121] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719841 ]
00:25:38.072 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:38.072 Zero copy mechanism will not be used.
00:25:38.072 EAL: No free 2048 kB hugepages reported on node 1
00:25:38.072 [2024-05-15 00:07:38.484513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:38.072 [2024-05-15 00:07:38.559557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:38.641 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:25:38.641 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:25:38.641 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:38.641 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:38.900 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:38.900 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.900 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:38.900 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.900 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:38.900 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:39.159 nvme0n1
00:25:39.159 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd
accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:39.159 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.159 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:39.159 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.159 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:39.159 00:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:39.418 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:39.418 Zero copy mechanism will not be used. 00:25:39.418 Running I/O for 2 seconds... 00:25:39.418 [2024-05-15 00:07:39.852076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.418 [2024-05-15 00:07:39.852599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.418 [2024-05-15 00:07:39.852628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.418 [2024-05-15 00:07:39.866694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.418 [2024-05-15 00:07:39.867087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.418 [2024-05-15 00:07:39.867111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.418 [2024-05-15 00:07:39.880440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.418 [2024-05-15 00:07:39.880857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.418 [2024-05-15 00:07:39.880879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.418 [2024-05-15 00:07:39.892453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.418 [2024-05-15 00:07:39.892908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.418 [2024-05-15 00:07:39.892930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.418 [2024-05-15 00:07:39.905541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.418 [2024-05-15 00:07:39.905962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.418 [2024-05-15 00:07:39.905985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.418 [2024-05-15 00:07:39.918965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.418 [2024-05-15 00:07:39.919224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.418 [2024-05-15 00:07:39.919246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.418 [2024-05-15 00:07:39.933656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.418 [2024-05-15 00:07:39.934361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.418 [2024-05-15 00:07:39.934383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.418 [2024-05-15 00:07:39.947204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.418 [2024-05-15 00:07:39.947801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.418 [2024-05-15 00:07:39.947822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.418 [2024-05-15 00:07:39.959648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.418 [2024-05-15 00:07:39.960212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.418 [2024-05-15 00:07:39.960232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.418 [2024-05-15 00:07:39.971116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.419 [2024-05-15 00:07:39.971649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.419 [2024-05-15 00:07:39.971670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.419 [2024-05-15 00:07:39.984760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.419 [2024-05-15 00:07:39.985355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.419 [2024-05-15 00:07:39.985376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.419 [2024-05-15 00:07:39.998272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.419 [2024-05-15 00:07:39.998642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.419 [2024-05-15 00:07:39.998663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.678 [2024-05-15 00:07:40.011047] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.678 [2024-05-15 00:07:40.011620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.678 [2024-05-15 00:07:40.011645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.678 [2024-05-15 00:07:40.024506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.678 [2024-05-15 00:07:40.024915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.678 [2024-05-15 00:07:40.024938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.678 [2024-05-15 00:07:40.038372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.678 [2024-05-15 00:07:40.038779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.678 [2024-05-15 00:07:40.038806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.678 [2024-05-15 00:07:40.051952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.678 [2024-05-15 00:07:40.052542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.678 [2024-05-15 00:07:40.052564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.678 [2024-05-15 00:07:40.064816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.678 [2024-05-15 00:07:40.065406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.678 [2024-05-15 00:07:40.065427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.678 [2024-05-15 00:07:40.078285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.678 [2024-05-15 00:07:40.078922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.678 [2024-05-15 00:07:40.078943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.678 [2024-05-15 00:07:40.091575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.678 [2024-05-15 00:07:40.092147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.092171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.105450] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.106006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.106027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.118806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.119263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.119284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.132949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.133615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.133636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.146397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.146990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.147010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.159803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.160424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.160445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.173369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.173992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.174012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.188477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.189146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.189166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
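Every record in this stream follows the same pattern: the injected crc32c corruption (accel_error_inject_error -o crc32c -t corrupt -i 32 in the trace above) makes the data-digest check on a WRITE PDU fail, the command is completed with COMMAND TRANSIENT TRANSPORT ERROR, and bdev_nvme records the status because --nvme-error-stat was set when the controller was configured. A minimal sketch of how the test then checks those counters, assuming only the rpc.py path, the /var/tmp/bperf.sock socket, and the jq filter shown earlier in this log (condensed for illustration; not the literal host/digest.sh):

    # Sketch: query the per-status error counters that --nvme-error-stat exposes
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # count completions recorded as COMMAND TRANSIENT TRANSPORT ERROR on nvme0n1
    errs=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # the digest-error test only passes if the corrupted digests really surfaced here
    (( errs > 0 ))

The --nvme-error-stat option set earlier in the trace is what makes these per-status-code counters appear under driver_specific.nvme_error in the bdev_get_iostat output; without it the jq filter above would return nothing to compare.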
00:25:39.679 [2024-05-15 00:07:40.202873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.203407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.203428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.215891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.216493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.216513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.229046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.229662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.229683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.242515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.242946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.242966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.255236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.679 [2024-05-15 00:07:40.255864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.679 [2024-05-15 00:07:40.255884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.679 [2024-05-15 00:07:40.269759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.270386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.270414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.283626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.284178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.284207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.296693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.297321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.297343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.310112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.310582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.310603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.323588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.324132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.324153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.337048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.337557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.337577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.349738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.350318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.350339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.362845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.363462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.363483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.375771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.376268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.376289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.389250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.389761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.389783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.403856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.404722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.404743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.418136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.418683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.418704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.430392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.430947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.430968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.443339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.443831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.443851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.456397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.457011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.457032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.470782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.471282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.471303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.485123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.485697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.485718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.498491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.498933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.498955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.512019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.512652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.512674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.938 [2024-05-15 00:07:40.525615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:39.938 [2024-05-15 00:07:40.526128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.938 [2024-05-15 00:07:40.526150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.539608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.540233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.540254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.553990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.554604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.554625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.567667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.568127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 
00:07:40.568148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.581331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.581845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.581866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.595534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.596088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.596110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.609275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.609904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.609925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.623274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.623874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.623899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.636646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.637130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.637151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.650923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.651478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.651499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.664347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.665066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.665087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.678028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.678352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.678372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.691582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.692196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.692217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.706672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.707306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.707327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.722159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.722718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.722739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.735166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.735779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.735800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.747182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.747729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.747750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.760812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.761362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.761383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.773657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.774267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.774288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.198 [2024-05-15 00:07:40.786485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.198 [2024-05-15 00:07:40.787034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.198 [2024-05-15 00:07:40.787054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.457 [2024-05-15 00:07:40.799619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.457 [2024-05-15 00:07:40.800060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.457 [2024-05-15 00:07:40.800081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.457 [2024-05-15 00:07:40.812302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.457 [2024-05-15 00:07:40.812810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.457 [2024-05-15 00:07:40.812830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.457 [2024-05-15 00:07:40.826082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.457 [2024-05-15 00:07:40.826654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.457 [2024-05-15 00:07:40.826675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.457 [2024-05-15 00:07:40.840401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.457 [2024-05-15 00:07:40.840865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.457 [2024-05-15 00:07:40.840885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.457 [2024-05-15 00:07:40.854236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.457 [2024-05-15 00:07:40.854901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.457 [2024-05-15 00:07:40.854922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.457 [2024-05-15 00:07:40.868492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.457 [2024-05-15 00:07:40.869041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:40.869062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:40.882138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:40.882662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:40.882683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:40.896968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:40.897529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:40.897552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:40.911362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:40.911906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:40.911926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:40.924981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:40.925387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:40.925408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:40.939162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:40.939646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:40.939666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:40.953541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:40.954032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:40.954053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:40.965828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:40.966461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:40.966482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:40.979761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:40.980237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:40.980265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:40.993276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:40.993924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:40.993945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:41.006749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:41.007382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:41.007402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:41.021271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:41.021773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:41.021793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:41.035302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.458 [2024-05-15 00:07:41.035830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.458 [2024-05-15 00:07:41.035850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.458 [2024-05-15 00:07:41.048468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 
00:07:41.048929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.048949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.060966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.061569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.061589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.076300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.076983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.077003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.091206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.091693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.091714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.105053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.105521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.105543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.118831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.119319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.119339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.131806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.132384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.132405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.145691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with 
pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.146368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.146389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.160776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.161344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.161365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.175101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.175681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.175701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.188945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.189592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.189612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.202944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.203468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.203488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.216222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.216821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.216841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.230682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.231172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.231196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.245355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.245782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.245802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.718 [2024-05-15 00:07:41.259094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.718 [2024-05-15 00:07:41.259677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.718 [2024-05-15 00:07:41.259697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.719 [2024-05-15 00:07:41.274267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.719 [2024-05-15 00:07:41.274865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.719 [2024-05-15 00:07:41.274887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.719 [2024-05-15 00:07:41.289326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.719 [2024-05-15 00:07:41.289835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.719 [2024-05-15 00:07:41.289855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.719 [2024-05-15 00:07:41.303792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.719 [2024-05-15 00:07:41.304359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.719 [2024-05-15 00:07:41.304381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.317636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.318176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.318202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.331500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.332067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.332087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.345113] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.345672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.345696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.359730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.360248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.360268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.374265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.374807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.374827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.388917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.389413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.389434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.402370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.402858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.402879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.415387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.415727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.415747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.428807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.429380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.429401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:40.979 [2024-05-15 00:07:41.443539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.443889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.443910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.457488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.457982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.458003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.471742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.472215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.472253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.485881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.486571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.486592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.498568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.499176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.499202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.512199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.512844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.512864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.526761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.527370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.527392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.541880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.542498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.542519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.979 [2024-05-15 00:07:41.556734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:40.979 [2024-05-15 00:07:41.557372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.979 [2024-05-15 00:07:41.557393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.240 [2024-05-15 00:07:41.571241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.240 [2024-05-15 00:07:41.571763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.240 [2024-05-15 00:07:41.571783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.240 [2024-05-15 00:07:41.584628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.240 [2024-05-15 00:07:41.585098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.240 [2024-05-15 00:07:41.585118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.240 [2024-05-15 00:07:41.598132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.240 [2024-05-15 00:07:41.598765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.240 [2024-05-15 00:07:41.598786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.240 [2024-05-15 00:07:41.612639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.240 [2024-05-15 00:07:41.613106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.240 [2024-05-15 00:07:41.613127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.240 [2024-05-15 00:07:41.626655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.240 [2024-05-15 00:07:41.627165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.240 [2024-05-15 00:07:41.627185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.240 [2024-05-15 00:07:41.640827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.240 [2024-05-15 00:07:41.641350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.240 [2024-05-15 00:07:41.641370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.240 [2024-05-15 00:07:41.656204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.240 [2024-05-15 00:07:41.656692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.240 [2024-05-15 00:07:41.656713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.670101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.670610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.670630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.684529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.685013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.685033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.697188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.697675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.697696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.711563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.712142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.712166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.725588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.726090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.726110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.740414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.740877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.740898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.755156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.755771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.755791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.768933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.769365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.769386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.782326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.782742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.782763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.795729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.796145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.796166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.810175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.810644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 [2024-05-15 00:07:41.810665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.241 [2024-05-15 00:07:41.824197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd6cb50) with pdu=0x2000190fef90 00:25:41.241 [2024-05-15 00:07:41.824670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.241 
[2024-05-15 00:07:41.824690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.241 00:25:41.241 Latency(us) 00:25:41.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.241 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:41.241 nvme0n1 : 2.01 2227.30 278.41 0.00 0.00 7168.28 4666.16 26319.26 00:25:41.241 =================================================================================================================== 00:25:41.241 Total : 2227.30 278.41 0.00 0.00 7168.28 4666.16 26319.26 00:25:41.241 0 00:25:41.501 00:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:41.501 00:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:41.501 00:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:41.501 00:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:41.501 | .driver_specific 00:25:41.501 | .nvme_error 00:25:41.501 | .status_code 00:25:41.501 | .command_transient_transport_error' 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3719841 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3719841 ']' 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3719841 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3719841 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3719841' 00:25:41.501 killing process with pid 3719841 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3719841 00:25:41.501 Received shutdown signal, test time was about 2.000000 seconds 00:25:41.501 00:25:41.501 Latency(us) 00:25:41.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.501 =================================================================================================================== 00:25:41.501 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:41.501 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3719841 00:25:41.760 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3717661 00:25:41.760 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3717661 ']' 00:25:41.760 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3717661 00:25:41.760 
00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:25:41.760 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:41.760 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3717661 00:25:41.760 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:41.760 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:41.760 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3717661' 00:25:41.760 killing process with pid 3717661 00:25:41.760 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3717661 00:25:41.760 [2024-05-15 00:07:42.328332] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:41.760 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3717661 00:25:42.018 00:25:42.018 real 0m16.770s 00:25:42.018 user 0m31.917s 00:25:42.018 sys 0m4.577s 00:25:42.018 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:42.018 00:07:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:42.018 ************************************ 00:25:42.018 END TEST nvmf_digest_error 00:25:42.018 ************************************ 00:25:42.018 00:07:42 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:42.018 00:07:42 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:42.018 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.018 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:42.018 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.018 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:42.019 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.019 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.019 rmmod nvme_tcp 00:25:42.019 rmmod nvme_fabrics 00:25:42.019 rmmod nvme_keyring 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3717661 ']' 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3717661 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3717661 ']' 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3717661 00:25:42.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3717661) - No such process 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3717661 is not found' 00:25:42.278 Process with pid 3717661 is not found 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.278 00:07:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.183 00:07:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.183 00:25:44.183 real 0m42.840s 00:25:44.183 user 1m6.175s 00:25:44.183 sys 0m14.461s 00:25:44.183 00:07:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:44.183 00:07:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:44.183 ************************************ 00:25:44.183 END TEST nvmf_digest 00:25:44.183 ************************************ 00:25:44.183 00:07:44 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:25:44.183 00:07:44 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:25:44.183 00:07:44 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:25:44.183 00:07:44 nvmf_tcp -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:44.183 00:07:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:44.183 00:07:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:44.183 00:07:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:44.183 ************************************ 00:25:44.183 START TEST nvmf_bdevperf 00:25:44.183 ************************************ 00:25:44.183 00:07:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:44.442 * Looking for test storage... 
00:25:44.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.442 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:44.443 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.443 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.443 00:07:44 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.443 00:07:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.443 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:44.443 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:44.443 00:07:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.443 00:07:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:51.011 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:51.011 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:51.011 Found net devices under 0000:af:00.0: cvl_0_0 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:51.011 Found net devices under 0000:af:00.1: cvl_0_1 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:51.011 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:51.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:25:51.271 00:25:51.271 --- 10.0.0.2 ping statistics --- 00:25:51.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.271 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:51.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:25:51.271 00:25:51.271 --- 10.0.0.1 ping statistics --- 00:25:51.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.271 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3724219 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3724219 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3724219 ']' 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:51.271 00:07:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:51.271 [2024-05-15 00:07:51.836962] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:25:51.271 [2024-05-15 00:07:51.837007] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.530 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.530 [2024-05-15 00:07:51.909437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:51.530 [2024-05-15 00:07:51.983670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
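For reference, the nvmf_tcp_init sequence traced above splits the two E810 ports between the host and a dedicated network namespace, checks connectivity in both directions, and then launches nvmf_tgt inside that namespace. A condensed sketch of those steps (interface names, IPs, and paths are the ones from this log; requires root on a machine with the same NICs):

  # flush and repartition the two cvl_* netdevs
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace
  # the target then runs inside the namespace, exactly as logged:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &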
00:25:51.530 [2024-05-15 00:07:51.983711] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.530 [2024-05-15 00:07:51.983721] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.530 [2024-05-15 00:07:51.983730] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.530 [2024-05-15 00:07:51.983737] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:51.530 [2024-05-15 00:07:51.983841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:51.530 [2024-05-15 00:07:51.983944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:51.530 [2024-05-15 00:07:51.983946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.097 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:52.097 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:25:52.097 00:07:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:52.097 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.097 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.097 00:07:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.097 00:07:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.097 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.097 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.097 [2024-05-15 00:07:52.688653] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.357 Malloc0 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.357 [2024-05-15 00:07:52.747657] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:52.357 [2024-05-15 00:07:52.747913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:52.357 { 00:25:52.357 "params": { 00:25:52.357 "name": "Nvme$subsystem", 00:25:52.357 "trtype": "$TEST_TRANSPORT", 00:25:52.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:52.357 "adrfam": "ipv4", 00:25:52.357 "trsvcid": "$NVMF_PORT", 00:25:52.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:52.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:52.357 "hdgst": ${hdgst:-false}, 00:25:52.357 "ddgst": ${ddgst:-false} 00:25:52.357 }, 00:25:52.357 "method": "bdev_nvme_attach_controller" 00:25:52.357 } 00:25:52.357 EOF 00:25:52.357 )") 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:52.357 00:07:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:52.357 "params": { 00:25:52.357 "name": "Nvme1", 00:25:52.357 "trtype": "tcp", 00:25:52.357 "traddr": "10.0.0.2", 00:25:52.357 "adrfam": "ipv4", 00:25:52.357 "trsvcid": "4420", 00:25:52.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:52.357 "hdgst": false, 00:25:52.357 "ddgst": false 00:25:52.357 }, 00:25:52.357 "method": "bdev_nvme_attach_controller" 00:25:52.357 }' 00:25:52.357 [2024-05-15 00:07:52.797859] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:25:52.357 [2024-05-15 00:07:52.797905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724376 ] 00:25:52.357 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.357 [2024-05-15 00:07:52.867900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.357 [2024-05-15 00:07:52.939647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.616 Running I/O for 1 seconds... 
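The rpc_cmd calls traced above forward their arguments to SPDK's scripts/rpc.py, so the target-side provisioning can be reproduced by hand against the running nvmf_tgt; the method names and arguments below are copied from the trace, while the rpc.py path and reliance on the default /var/tmp/spdk.sock socket are assumptions about a standalone setup. The initiator-side JSON is likewise a sketch: the bdev_nvme_attach_controller params object is the one printed above by gen_nvmf_target_json, but the temp-file plumbing and the outer "subsystems"/"bdev" wrapper are assumed here rather than shown in the log.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # uses /var/tmp/spdk.sock by default
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # assumed shape of the config bdevperf is fed via --json (here through a temp file
  # instead of the /dev/fd/62 process substitution used by the harness)
  cfg=$(mktemp)
  cat > "$cfg" <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json "$cfg" -q 128 -o 4096 -w verify -t 1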
00:25:53.993 00:25:53.993 Latency(us) 00:25:53.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.993 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:53.993 Verification LBA range: start 0x0 length 0x4000 00:25:53.993 Nvme1n1 : 1.01 11522.03 45.01 0.00 0.00 11057.05 2451.05 22754.10 00:25:53.993 =================================================================================================================== 00:25:53.993 Total : 11522.03 45.01 0.00 0.00 11057.05 2451.05 22754.10 00:25:53.993 00:07:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3724653 00:25:53.993 00:07:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:53.994 00:07:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:53.994 00:07:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:53.994 00:07:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:53.994 00:07:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:53.994 00:07:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:53.994 00:07:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:53.994 { 00:25:53.994 "params": { 00:25:53.994 "name": "Nvme$subsystem", 00:25:53.994 "trtype": "$TEST_TRANSPORT", 00:25:53.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.994 "adrfam": "ipv4", 00:25:53.994 "trsvcid": "$NVMF_PORT", 00:25:53.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.994 "hdgst": ${hdgst:-false}, 00:25:53.994 "ddgst": ${ddgst:-false} 00:25:53.994 }, 00:25:53.994 "method": "bdev_nvme_attach_controller" 00:25:53.994 } 00:25:53.994 EOF 00:25:53.994 )") 00:25:53.994 00:07:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:53.994 00:07:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:53.994 00:07:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:53.994 00:07:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:53.994 "params": { 00:25:53.994 "name": "Nvme1", 00:25:53.994 "trtype": "tcp", 00:25:53.994 "traddr": "10.0.0.2", 00:25:53.994 "adrfam": "ipv4", 00:25:53.994 "trsvcid": "4420", 00:25:53.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:53.994 "hdgst": false, 00:25:53.994 "ddgst": false 00:25:53.994 }, 00:25:53.994 "method": "bdev_nvme_attach_controller" 00:25:53.994 }' 00:25:53.994 [2024-05-15 00:07:54.441424] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:25:53.994 [2024-05-15 00:07:54.441478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724653 ] 00:25:53.994 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.994 [2024-05-15 00:07:54.510260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.994 [2024-05-15 00:07:54.575126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.273 Running I/O for 15 seconds... 
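After the 1-second baseline run, bdevperf.sh starts a second, 15-second verify job and then hard-kills the nvmf_tgt three seconds in, so the flood of "ABORTED - SQ DELETION" completions below is the initiator-side NVMe driver failing the READs still queued (up to the -q 128 queue depth) once the target's queues disappear. A minimal sketch of that timing, using the flags, helper, and PIDs shown in this log (gen_nvmf_target_json is the helper sourced from test/nvmf/common.sh):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &   # bdevperfpid=3724653 in this run
  sleep 3            # let the verify workload reach steady state
  kill -9 3724219    # nvmfpid: hard-kill nvmf_tgt while I/O is in flight
  sleep 3            # outstanding READs complete with ABORTED - SQ DELETION (logged below)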
00:25:56.840 00:07:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3724219 00:25:56.840 00:07:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:56.840 [2024-05-15 00:07:57.414916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.414957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.414979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.414992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 
00:07:57.415175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.840 [2024-05-15 00:07:57.415316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.840 [2024-05-15 00:07:57.415327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415599] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.415989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.415998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.416008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.416018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.416028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.416037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.416048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.416057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.416068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.416077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.416088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.416098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.416109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.841 [2024-05-15 00:07:57.416118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.841 [2024-05-15 00:07:57.416129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:56.842 [2024-05-15 00:07:57.416210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 
00:07:57.416413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416810] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.842 [2024-05-15 00:07:57.416819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.842 [2024-05-15 00:07:57.416829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.416839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.416851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.416860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.416870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.416880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.416890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.416899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.416910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.416919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.416929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.416938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.416955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.416964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.416975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.416984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.416995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126128 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:56.843 [2024-05-15 00:07:57.417428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.843 [2024-05-15 00:07:57.417545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.843 [2024-05-15 00:07:57.417555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4d610 is same with the state(5) to be set 00:25:56.843 [2024-05-15 00:07:57.417566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.843 [2024-05-15 00:07:57.417574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.844 [2024-05-15 00:07:57.417582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126264 len:8 PRP1 0x0 PRP2 0x0 00:25:56.844 [2024-05-15 00:07:57.417592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.844 [2024-05-15 00:07:57.417637] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa4d610 was disconnected and freed. reset controller. 
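Every READ completion in the dump above carries the same status, which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION (00/08)": status code type 0x0 (generic command status) and status code 0x08, defined by the NVMe specification as Command Aborted due to SQ Deletion. That is the expected status for I/O still queued on a submission queue when the host tears the queue pair down, which is what the trailing "qpair 0xa4d610 was disconnected and freed. reset controller." entry records. Below is a minimal sketch of decoding that "(SCT/SC)" pair; it is not SPDK code, and the tables are only the small excerpt of NVMe status values needed here.

```python
# Minimal sketch (not SPDK code): decode the "(SCT/SC)" pair printed by
# spdk_nvme_print_completion, e.g. "ABORTED - SQ DELETION (00/08)" above.
# The tables are a small excerpt of the NVMe status values, not SPDK's own.

SCT_NAMES = {
    0x0: "GENERIC COMMAND STATUS",
    0x1: "COMMAND SPECIFIC STATUS",
    0x2: "MEDIA AND DATA INTEGRITY ERRORS",
    0x3: "PATH RELATED STATUS",
}

GENERIC_SC = {
    0x00: "SUCCESSFUL COMPLETION",
    0x04: "DATA TRANSFER ERROR",
    0x07: "COMMAND ABORT REQUESTED",
    0x08: "COMMAND ABORTED DUE TO SQ DELETION",  # the (00/08) seen above
}

def decode_status(pair: str) -> str:
    """Turn a '(SCT/SC)' string such as '(00/08)' into readable text."""
    sct_hex, sc_hex = pair.strip("()").split("/")
    sct, sc = int(sct_hex, 16), int(sc_hex, 16)
    sct_name = SCT_NAMES.get(sct, f"unknown SCT {sct:#x}")
    if sct == 0x0:
        sc_name = GENERIC_SC.get(sc, f"unknown SC {sc:#x}")
    else:
        sc_name = f"SC {sc:#x}"
    return f"{sct_name}: {sc_name}"

if __name__ == "__main__":
    # Prints: GENERIC COMMAND STATUS: COMMAND ABORTED DUE TO SQ DELETION
    print(decode_status("(00/08)"))
```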
00:25:56.844 [2024-05-15 00:07:57.420383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.844 [2024-05-15 00:07:57.420433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:56.844 [2024-05-15 00:07:57.421217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-05-15 00:07:57.421576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-05-15 00:07:57.421588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:56.844 [2024-05-15 00:07:57.421599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:56.844 [2024-05-15 00:07:57.421772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:56.844 [2024-05-15 00:07:57.421943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.844 [2024-05-15 00:07:57.421953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.844 [2024-05-15 00:07:57.421963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.844 [2024-05-15 00:07:57.424664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.104 [2024-05-15 00:07:57.433580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.104 [2024-05-15 00:07:57.434182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.434626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.434667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.104 [2024-05-15 00:07:57.434701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.104 [2024-05-15 00:07:57.435142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.104 [2024-05-15 00:07:57.435329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.104 [2024-05-15 00:07:57.435340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.104 [2024-05-15 00:07:57.435349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.104 [2024-05-15 00:07:57.438019] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
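From this point the log repeats one reconnect cycle: nvme_ctrlr_disconnect marks the controller as resetting, the connect() in posix_sock_create to 10.0.0.2:4420 fails with errno = 111, flushing the never-established socket reports "(9): Bad file descriptor", controller re-initialization fails, and bdev_nvme declares the reset failed. On Linux errno 111 is ECONNREFUSED (nothing accepting connections on the target port) and errno 9 is EBADF. The short sketch below just maps those errno values and, purely as an assumption for illustration, reproduces a refused connect against a loopback port with no listener; it is not how SPDK's posix sock layer is implemented.

```python
# Sketch: what the two errno values in the log mean on Linux. errno 111 is
# what posix_sock_create reports for the failed connect(); errno 9 is the
# "Bad file descriptor" seen when flushing the qpair whose socket was never
# created. Nothing below is SPDK code.

import errno
import os
import socket

for code in (111, 9):
    # On Linux: 111 -> ECONNREFUSED "Connection refused", 9 -> EBADF
    print(code, errno.errorcode.get(code, "?"), "-", os.strerror(code))

# Assumption for illustration: no listener on loopback port 4420, so the
# connect attempt is refused just like the one against 10.0.0.2:4420 above.
try:
    socket.create_connection(("127.0.0.1", 4420), timeout=1)
except OSError as e:
    print("connect() failed, errno =", e.errno)   # typically 111
```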
00:25:57.104 [2024-05-15 00:07:57.446485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.104 [2024-05-15 00:07:57.447148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.447582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.447624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.104 [2024-05-15 00:07:57.447657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.104 [2024-05-15 00:07:57.448180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.104 [2024-05-15 00:07:57.448351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.104 [2024-05-15 00:07:57.448362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.104 [2024-05-15 00:07:57.448371] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.104 [2024-05-15 00:07:57.450965] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.104 [2024-05-15 00:07:57.459293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.104 [2024-05-15 00:07:57.459943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.460453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.460497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.104 [2024-05-15 00:07:57.460529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.104 [2024-05-15 00:07:57.460984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.104 [2024-05-15 00:07:57.461157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.104 [2024-05-15 00:07:57.461167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.104 [2024-05-15 00:07:57.461176] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.104 [2024-05-15 00:07:57.463773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.104 [2024-05-15 00:07:57.472029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.104 [2024-05-15 00:07:57.472676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.473087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.473128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.104 [2024-05-15 00:07:57.473160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.104 [2024-05-15 00:07:57.473769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.104 [2024-05-15 00:07:57.474211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.104 [2024-05-15 00:07:57.474222] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.104 [2024-05-15 00:07:57.474231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.104 [2024-05-15 00:07:57.476841] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.104 [2024-05-15 00:07:57.484850] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.104 [2024-05-15 00:07:57.485476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.485968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.486006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.104 [2024-05-15 00:07:57.486015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.104 [2024-05-15 00:07:57.486174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.104 [2024-05-15 00:07:57.486361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.104 [2024-05-15 00:07:57.486372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.104 [2024-05-15 00:07:57.486380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.104 [2024-05-15 00:07:57.488995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.104 [2024-05-15 00:07:57.497743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.104 [2024-05-15 00:07:57.498302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.498730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.498742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.104 [2024-05-15 00:07:57.498752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.104 [2024-05-15 00:07:57.498919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.104 [2024-05-15 00:07:57.499086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.104 [2024-05-15 00:07:57.499096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.104 [2024-05-15 00:07:57.499105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.104 [2024-05-15 00:07:57.501721] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.104 [2024-05-15 00:07:57.510631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.104 [2024-05-15 00:07:57.511263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.511749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.511789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.104 [2024-05-15 00:07:57.511821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.104 [2024-05-15 00:07:57.512243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.104 [2024-05-15 00:07:57.512411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.104 [2024-05-15 00:07:57.512422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.104 [2024-05-15 00:07:57.512430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.104 [2024-05-15 00:07:57.514992] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.104 [2024-05-15 00:07:57.523493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.104 [2024-05-15 00:07:57.524093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.524547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.524590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.104 [2024-05-15 00:07:57.524621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.104 [2024-05-15 00:07:57.525079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.104 [2024-05-15 00:07:57.525251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.104 [2024-05-15 00:07:57.525261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.104 [2024-05-15 00:07:57.525270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.104 [2024-05-15 00:07:57.527840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.104 [2024-05-15 00:07:57.536254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.104 [2024-05-15 00:07:57.536865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.537276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.104 [2024-05-15 00:07:57.537338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.104 [2024-05-15 00:07:57.537348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.104 [2024-05-15 00:07:57.537516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.537683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.537694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.537703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.540271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.105 [2024-05-15 00:07:57.549037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.549691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.550178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.550244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.550254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.550421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.550588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.550598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.550607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.553164] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.105 [2024-05-15 00:07:57.561906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.562492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.563002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.563042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.563071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.563241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.563409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.563419] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.563428] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.565984] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.105 [2024-05-15 00:07:57.574693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.575302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.575818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.575859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.575898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.576388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.576555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.576565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.576574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.579130] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.105 [2024-05-15 00:07:57.587498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.588087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.588577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.588621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.588654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.589229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.589469] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.589483] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.589495] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.593271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.105 [2024-05-15 00:07:57.600837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.601369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.601801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.601842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.601873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.602328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.602496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.602506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.602515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.605098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.105 [2024-05-15 00:07:57.613649] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.614274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.614787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.614828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.614859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.615286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.615454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.615464] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.615473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.618032] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.105 [2024-05-15 00:07:57.626379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.627000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.627479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.627522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.627553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.628148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.628577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.628588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.628596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.631151] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.105 [2024-05-15 00:07:57.639179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.639807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.640243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.640256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.640265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.640432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.640599] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.640609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.640618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.643170] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.105 [2024-05-15 00:07:57.651981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.652556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.653040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.653080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.653112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.653606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.653777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.653787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.653796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.656361] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.105 [2024-05-15 00:07:57.664781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.665389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.665802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.665815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.665824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.665995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.105 [2024-05-15 00:07:57.666166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.105 [2024-05-15 00:07:57.666176] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.105 [2024-05-15 00:07:57.666185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.105 [2024-05-15 00:07:57.668889] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.105 [2024-05-15 00:07:57.677811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.105 [2024-05-15 00:07:57.678409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.678824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.105 [2024-05-15 00:07:57.678865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.105 [2024-05-15 00:07:57.678897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.105 [2024-05-15 00:07:57.679504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.106 [2024-05-15 00:07:57.679790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.106 [2024-05-15 00:07:57.679801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.106 [2024-05-15 00:07:57.679809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.106 [2024-05-15 00:07:57.682503] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.106 [2024-05-15 00:07:57.690765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.106 [2024-05-15 00:07:57.691362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.106 [2024-05-15 00:07:57.691664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.106 [2024-05-15 00:07:57.691677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.106 [2024-05-15 00:07:57.691686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.106 [2024-05-15 00:07:57.691858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.106 [2024-05-15 00:07:57.692029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.106 [2024-05-15 00:07:57.692039] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.106 [2024-05-15 00:07:57.692051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.365 [2024-05-15 00:07:57.694755] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.365 [2024-05-15 00:07:57.703896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.365 [2024-05-15 00:07:57.704525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.365 [2024-05-15 00:07:57.705033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.365 [2024-05-15 00:07:57.705073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.365 [2024-05-15 00:07:57.705105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.365 [2024-05-15 00:07:57.705729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.365 [2024-05-15 00:07:57.705897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.365 [2024-05-15 00:07:57.705907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.365 [2024-05-15 00:07:57.705916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.365 [2024-05-15 00:07:57.708476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.365 [2024-05-15 00:07:57.716657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.365 [2024-05-15 00:07:57.717257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.365 [2024-05-15 00:07:57.717764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.365 [2024-05-15 00:07:57.717804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.365 [2024-05-15 00:07:57.717837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.365 [2024-05-15 00:07:57.718450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.365 [2024-05-15 00:07:57.718952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.365 [2024-05-15 00:07:57.718963] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.365 [2024-05-15 00:07:57.718972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.365 [2024-05-15 00:07:57.721531] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.365 [2024-05-15 00:07:57.729406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.365 [2024-05-15 00:07:57.730023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.365 [2024-05-15 00:07:57.730535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.365 [2024-05-15 00:07:57.730577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.365 [2024-05-15 00:07:57.730609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.365 [2024-05-15 00:07:57.731218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.365 [2024-05-15 00:07:57.731753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.365 [2024-05-15 00:07:57.731767] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.365 [2024-05-15 00:07:57.731779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.365 [2024-05-15 00:07:57.735563] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.365 [2024-05-15 00:07:57.742686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.365 [2024-05-15 00:07:57.743310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.365 [2024-05-15 00:07:57.743736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.365 [2024-05-15 00:07:57.743776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.365 [2024-05-15 00:07:57.743808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.365 [2024-05-15 00:07:57.744095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.365 [2024-05-15 00:07:57.744268] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.365 [2024-05-15 00:07:57.744279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.365 [2024-05-15 00:07:57.744288] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.365 [2024-05-15 00:07:57.746887] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.365 [2024-05-15 00:07:57.755450] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.365 [2024-05-15 00:07:57.756068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.365 [2024-05-15 00:07:57.756558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.365 [2024-05-15 00:07:57.756600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.365 [2024-05-15 00:07:57.756632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.365 [2024-05-15 00:07:57.757067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.365 [2024-05-15 00:07:57.757239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.365 [2024-05-15 00:07:57.757249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.365 [2024-05-15 00:07:57.757258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.366 [2024-05-15 00:07:57.759814] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.366 [2024-05-15 00:07:57.768142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.366 [2024-05-15 00:07:57.768725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.769155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.769167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.366 [2024-05-15 00:07:57.769175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.366 [2024-05-15 00:07:57.769362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.366 [2024-05-15 00:07:57.769529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.366 [2024-05-15 00:07:57.769540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.366 [2024-05-15 00:07:57.769549] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.366 [2024-05-15 00:07:57.772106] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.366 [2024-05-15 00:07:57.780891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.366 [2024-05-15 00:07:57.781512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.782018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.782058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.366 [2024-05-15 00:07:57.782090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.366 [2024-05-15 00:07:57.782283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.366 [2024-05-15 00:07:57.782450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.366 [2024-05-15 00:07:57.782460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.366 [2024-05-15 00:07:57.782469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.366 [2024-05-15 00:07:57.785024] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.366 [2024-05-15 00:07:57.793574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.366 [2024-05-15 00:07:57.794116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.794618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.794661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.366 [2024-05-15 00:07:57.794693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.366 [2024-05-15 00:07:57.795088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.366 [2024-05-15 00:07:57.795262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.366 [2024-05-15 00:07:57.795273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.366 [2024-05-15 00:07:57.795282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.366 [2024-05-15 00:07:57.797826] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.366 [2024-05-15 00:07:57.806367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.366 [2024-05-15 00:07:57.806974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.807404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.807417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.366 [2024-05-15 00:07:57.807427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.366 [2024-05-15 00:07:57.807593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.366 [2024-05-15 00:07:57.807761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.366 [2024-05-15 00:07:57.807771] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.366 [2024-05-15 00:07:57.807780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.366 [2024-05-15 00:07:57.810341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.366 [2024-05-15 00:07:57.819170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.366 [2024-05-15 00:07:57.819757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.820207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.820249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.366 [2024-05-15 00:07:57.820280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.366 [2024-05-15 00:07:57.820876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.366 [2024-05-15 00:07:57.821129] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.366 [2024-05-15 00:07:57.821139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.366 [2024-05-15 00:07:57.821148] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.366 [2024-05-15 00:07:57.823709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.366 [2024-05-15 00:07:57.831955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.366 [2024-05-15 00:07:57.832561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.832921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.832962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.366 [2024-05-15 00:07:57.832993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.366 [2024-05-15 00:07:57.833337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.366 [2024-05-15 00:07:57.833505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.366 [2024-05-15 00:07:57.833515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.366 [2024-05-15 00:07:57.833524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.366 [2024-05-15 00:07:57.836061] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.366 [2024-05-15 00:07:57.844797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.366 [2024-05-15 00:07:57.845391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.845820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.845833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.366 [2024-05-15 00:07:57.845842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.366 [2024-05-15 00:07:57.846013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.366 [2024-05-15 00:07:57.846186] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.366 [2024-05-15 00:07:57.846208] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.366 [2024-05-15 00:07:57.846217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.366 [2024-05-15 00:07:57.848837] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.366 [2024-05-15 00:07:57.857612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.366 [2024-05-15 00:07:57.858240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.858590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.858605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.366 [2024-05-15 00:07:57.858614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.366 [2024-05-15 00:07:57.858781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.366 [2024-05-15 00:07:57.858947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.366 [2024-05-15 00:07:57.858957] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.366 [2024-05-15 00:07:57.858966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.366 [2024-05-15 00:07:57.861559] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.366 [2024-05-15 00:07:57.870514] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.366 [2024-05-15 00:07:57.871182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.871631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.871671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.366 [2024-05-15 00:07:57.871703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.366 [2024-05-15 00:07:57.872179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.366 [2024-05-15 00:07:57.872425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.366 [2024-05-15 00:07:57.872439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.366 [2024-05-15 00:07:57.872451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.366 [2024-05-15 00:07:57.876229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.366 [2024-05-15 00:07:57.883911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.366 [2024-05-15 00:07:57.884553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.884964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.366 [2024-05-15 00:07:57.885004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.367 [2024-05-15 00:07:57.885035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.367 [2024-05-15 00:07:57.885647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.367 [2024-05-15 00:07:57.886172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.367 [2024-05-15 00:07:57.886182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.367 [2024-05-15 00:07:57.886196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.367 [2024-05-15 00:07:57.888759] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.367 [2024-05-15 00:07:57.896887] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.367 [2024-05-15 00:07:57.897508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.367 [2024-05-15 00:07:57.898000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.367 [2024-05-15 00:07:57.898040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.367 [2024-05-15 00:07:57.898079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.367 [2024-05-15 00:07:57.898686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.367 [2024-05-15 00:07:57.899240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.367 [2024-05-15 00:07:57.899251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.367 [2024-05-15 00:07:57.899260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.367 [2024-05-15 00:07:57.901819] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.367 [2024-05-15 00:07:57.909716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.367 [2024-05-15 00:07:57.910268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.367 [2024-05-15 00:07:57.910676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.367 [2024-05-15 00:07:57.910716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.367 [2024-05-15 00:07:57.910748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.367 [2024-05-15 00:07:57.911239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.367 [2024-05-15 00:07:57.911407] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.367 [2024-05-15 00:07:57.911418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.367 [2024-05-15 00:07:57.911427] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.367 [2024-05-15 00:07:57.913992] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.367 [2024-05-15 00:07:57.922569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.367 [2024-05-15 00:07:57.923114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.367 [2024-05-15 00:07:57.923623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.367 [2024-05-15 00:07:57.923664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.367 [2024-05-15 00:07:57.923695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.367 [2024-05-15 00:07:57.924250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.367 [2024-05-15 00:07:57.924424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.367 [2024-05-15 00:07:57.924434] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.367 [2024-05-15 00:07:57.924444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.367 [2024-05-15 00:07:57.927141] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.367 [2024-05-15 00:07:57.935616] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.367 [2024-05-15 00:07:57.936246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.367 [2024-05-15 00:07:57.936653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.367 [2024-05-15 00:07:57.936694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.367 [2024-05-15 00:07:57.936725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.367 [2024-05-15 00:07:57.937244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.367 [2024-05-15 00:07:57.937419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.367 [2024-05-15 00:07:57.937430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.367 [2024-05-15 00:07:57.937439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.367 [2024-05-15 00:07:57.940107] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.367 [2024-05-15 00:07:57.948539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.367 [2024-05-15 00:07:57.949072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.367 [2024-05-15 00:07:57.949477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.367 [2024-05-15 00:07:57.949522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.367 [2024-05-15 00:07:57.949556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.367 [2024-05-15 00:07:57.950157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.367 [2024-05-15 00:07:57.950372] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.367 [2024-05-15 00:07:57.950383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.367 [2024-05-15 00:07:57.950392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.367 [2024-05-15 00:07:57.953085] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.628 [2024-05-15 00:07:57.961417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.628 [2024-05-15 00:07:57.961944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:57.962358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:57.962400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.628 [2024-05-15 00:07:57.962432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.628 [2024-05-15 00:07:57.963027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.628 [2024-05-15 00:07:57.963379] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.628 [2024-05-15 00:07:57.963394] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.628 [2024-05-15 00:07:57.963407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.628 [2024-05-15 00:07:57.967183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.628 [2024-05-15 00:07:57.974930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.628 [2024-05-15 00:07:57.975572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:57.975986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:57.976027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.628 [2024-05-15 00:07:57.976059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.628 [2024-05-15 00:07:57.976633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.628 [2024-05-15 00:07:57.976811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.628 [2024-05-15 00:07:57.976822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.628 [2024-05-15 00:07:57.976831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.628 [2024-05-15 00:07:57.979531] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.628 [2024-05-15 00:07:57.987837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.628 [2024-05-15 00:07:57.988442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:57.988932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:57.988972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.628 [2024-05-15 00:07:57.989004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.628 [2024-05-15 00:07:57.989463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.628 [2024-05-15 00:07:57.989636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.628 [2024-05-15 00:07:57.989646] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.628 [2024-05-15 00:07:57.989655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.628 [2024-05-15 00:07:57.992310] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.628 [2024-05-15 00:07:58.000600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.628 [2024-05-15 00:07:58.001134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:58.001421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:58.001434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.628 [2024-05-15 00:07:58.001443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.628 [2024-05-15 00:07:58.001610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.628 [2024-05-15 00:07:58.001776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.628 [2024-05-15 00:07:58.001787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.628 [2024-05-15 00:07:58.001795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.628 [2024-05-15 00:07:58.004356] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.628 [2024-05-15 00:07:58.013408] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.628 [2024-05-15 00:07:58.013948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:58.014353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:58.014395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.628 [2024-05-15 00:07:58.014429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.628 [2024-05-15 00:07:58.014595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.628 [2024-05-15 00:07:58.014762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.628 [2024-05-15 00:07:58.014775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.628 [2024-05-15 00:07:58.014784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.628 [2024-05-15 00:07:58.017344] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.628 [2024-05-15 00:07:58.026160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.628 [2024-05-15 00:07:58.026709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:58.027069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.628 [2024-05-15 00:07:58.027109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.628 [2024-05-15 00:07:58.027142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.628 [2024-05-15 00:07:58.027559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.628 [2024-05-15 00:07:58.027728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.628 [2024-05-15 00:07:58.027738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.628 [2024-05-15 00:07:58.027747] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.628 [2024-05-15 00:07:58.030308] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.628 [2024-05-15 00:07:58.038959] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.629 [2024-05-15 00:07:58.039579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.039947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.039987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.629 [2024-05-15 00:07:58.040020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.629 [2024-05-15 00:07:58.040570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.629 [2024-05-15 00:07:58.040738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.629 [2024-05-15 00:07:58.040749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.629 [2024-05-15 00:07:58.040758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.629 [2024-05-15 00:07:58.043389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.629 [2024-05-15 00:07:58.051804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.629 [2024-05-15 00:07:58.052417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.052879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.052920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.629 [2024-05-15 00:07:58.052953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.629 [2024-05-15 00:07:58.053381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.629 [2024-05-15 00:07:58.053556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.629 [2024-05-15 00:07:58.053566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.629 [2024-05-15 00:07:58.053578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.629 [2024-05-15 00:07:58.056137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.629 [2024-05-15 00:07:58.064647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.629 [2024-05-15 00:07:58.065187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.065537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.065550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.629 [2024-05-15 00:07:58.065559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.629 [2024-05-15 00:07:58.065726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.629 [2024-05-15 00:07:58.065893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.629 [2024-05-15 00:07:58.065903] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.629 [2024-05-15 00:07:58.065911] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.629 [2024-05-15 00:07:58.068478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.629 [2024-05-15 00:07:58.077358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.629 [2024-05-15 00:07:58.077880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.078051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.078063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.629 [2024-05-15 00:07:58.078072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.629 [2024-05-15 00:07:58.078245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.629 [2024-05-15 00:07:58.078412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.629 [2024-05-15 00:07:58.078422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.629 [2024-05-15 00:07:58.078431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.629 [2024-05-15 00:07:58.080987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.629 [2024-05-15 00:07:58.090050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.629 [2024-05-15 00:07:58.090615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.091025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.091064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.629 [2024-05-15 00:07:58.091096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.629 [2024-05-15 00:07:58.091631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.629 [2024-05-15 00:07:58.091799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.629 [2024-05-15 00:07:58.091809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.629 [2024-05-15 00:07:58.091818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.629 [2024-05-15 00:07:58.094411] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.629 [2024-05-15 00:07:58.102863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.629 [2024-05-15 00:07:58.103440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.103702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.103742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.629 [2024-05-15 00:07:58.103773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.629 [2024-05-15 00:07:58.104019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.629 [2024-05-15 00:07:58.104187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.629 [2024-05-15 00:07:58.104202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.629 [2024-05-15 00:07:58.104211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.629 [2024-05-15 00:07:58.106767] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.629 [2024-05-15 00:07:58.115659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.629 [2024-05-15 00:07:58.116171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.116668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.116709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.629 [2024-05-15 00:07:58.116741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.629 [2024-05-15 00:07:58.116941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.629 [2024-05-15 00:07:58.117109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.629 [2024-05-15 00:07:58.117119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.629 [2024-05-15 00:07:58.117128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.629 [2024-05-15 00:07:58.119692] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.629 [2024-05-15 00:07:58.128504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.629 [2024-05-15 00:07:58.128976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.129376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.129390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.629 [2024-05-15 00:07:58.129399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.629 [2024-05-15 00:07:58.129570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.629 [2024-05-15 00:07:58.129743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.629 [2024-05-15 00:07:58.129753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.629 [2024-05-15 00:07:58.129762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.629 [2024-05-15 00:07:58.132458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.629 [2024-05-15 00:07:58.141535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.629 [2024-05-15 00:07:58.142082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.142386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.629 [2024-05-15 00:07:58.142399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.629 [2024-05-15 00:07:58.142408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.629 [2024-05-15 00:07:58.142581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.629 [2024-05-15 00:07:58.142753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.629 [2024-05-15 00:07:58.142763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.629 [2024-05-15 00:07:58.142772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.630 [2024-05-15 00:07:58.145474] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.630 [2024-05-15 00:07:58.154554] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.630 [2024-05-15 00:07:58.155018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.630 [2024-05-15 00:07:58.155414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.630 [2024-05-15 00:07:58.155427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.630 [2024-05-15 00:07:58.155436] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.630 [2024-05-15 00:07:58.155607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.630 [2024-05-15 00:07:58.155779] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.630 [2024-05-15 00:07:58.155789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.630 [2024-05-15 00:07:58.155798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.630 [2024-05-15 00:07:58.158499] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.630 [2024-05-15 00:07:58.167572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.630 [2024-05-15 00:07:58.168185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.630 [2024-05-15 00:07:58.168544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.630 [2024-05-15 00:07:58.168557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.630 [2024-05-15 00:07:58.168567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.630 [2024-05-15 00:07:58.168748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.630 [2024-05-15 00:07:58.168930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.630 [2024-05-15 00:07:58.168941] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.630 [2024-05-15 00:07:58.168951] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.630 [2024-05-15 00:07:58.171770] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.630 [2024-05-15 00:07:58.180527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.630 [2024-05-15 00:07:58.181081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.630 [2024-05-15 00:07:58.181507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.630 [2024-05-15 00:07:58.181521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.630 [2024-05-15 00:07:58.181531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.630 [2024-05-15 00:07:58.181713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.630 [2024-05-15 00:07:58.181894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.630 [2024-05-15 00:07:58.181905] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.630 [2024-05-15 00:07:58.181914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.630 [2024-05-15 00:07:58.184894] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.630 [2024-05-15 00:07:58.193767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.630 [2024-05-15 00:07:58.194394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.630 [2024-05-15 00:07:58.194832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.630 [2024-05-15 00:07:58.194872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.630 [2024-05-15 00:07:58.194904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.630 [2024-05-15 00:07:58.195124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.630 [2024-05-15 00:07:58.195321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.630 [2024-05-15 00:07:58.195333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.630 [2024-05-15 00:07:58.195342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.630 [2024-05-15 00:07:58.198106] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.630 [2024-05-15 00:07:58.206704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.630 [2024-05-15 00:07:58.207322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.630 [2024-05-15 00:07:58.207698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.630 [2024-05-15 00:07:58.207710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.630 [2024-05-15 00:07:58.207720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.630 [2024-05-15 00:07:58.207891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.630 [2024-05-15 00:07:58.208063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.630 [2024-05-15 00:07:58.208074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.630 [2024-05-15 00:07:58.208083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.630 [2024-05-15 00:07:58.210784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.891 [2024-05-15 00:07:58.219700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.891 [2024-05-15 00:07:58.220259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-05-15 00:07:58.220666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-05-15 00:07:58.220714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.891 [2024-05-15 00:07:58.220746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.891 [2024-05-15 00:07:58.221334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.891 [2024-05-15 00:07:58.221508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.891 [2024-05-15 00:07:58.221518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.891 [2024-05-15 00:07:58.221527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.891 [2024-05-15 00:07:58.224229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.891 [2024-05-15 00:07:58.232413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.891 [2024-05-15 00:07:58.233047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-05-15 00:07:58.233459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-05-15 00:07:58.233501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.891 [2024-05-15 00:07:58.233532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.891 [2024-05-15 00:07:58.233819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.891 [2024-05-15 00:07:58.233986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.891 [2024-05-15 00:07:58.233996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.891 [2024-05-15 00:07:58.234005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.891 [2024-05-15 00:07:58.236599] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.891 [2024-05-15 00:07:58.245188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.891 [2024-05-15 00:07:58.245815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-05-15 00:07:58.246231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-05-15 00:07:58.246272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.891 [2024-05-15 00:07:58.246304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.891 [2024-05-15 00:07:58.246668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.891 [2024-05-15 00:07:58.246907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.891 [2024-05-15 00:07:58.246922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.891 [2024-05-15 00:07:58.246934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.891 [2024-05-15 00:07:58.250725] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.891 [2024-05-15 00:07:58.258573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.891 [2024-05-15 00:07:58.259125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-05-15 00:07:58.259493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-05-15 00:07:58.259534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.891 [2024-05-15 00:07:58.259573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.891 [2024-05-15 00:07:58.260143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.891 [2024-05-15 00:07:58.260316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.891 [2024-05-15 00:07:58.260327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.891 [2024-05-15 00:07:58.260336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.891 [2024-05-15 00:07:58.262970] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.891 [2024-05-15 00:07:58.271375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.891 [2024-05-15 00:07:58.271940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-05-15 00:07:58.272436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-05-15 00:07:58.272450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.891 [2024-05-15 00:07:58.272459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.891 [2024-05-15 00:07:58.272620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.891 [2024-05-15 00:07:58.272781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.891 [2024-05-15 00:07:58.272792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.891 [2024-05-15 00:07:58.272800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.891 [2024-05-15 00:07:58.275368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.892 [2024-05-15 00:07:58.284265] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.892 [2024-05-15 00:07:58.284815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.285263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.285307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.892 [2024-05-15 00:07:58.285339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.892 [2024-05-15 00:07:58.285851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.892 [2024-05-15 00:07:58.286010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.892 [2024-05-15 00:07:58.286019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.892 [2024-05-15 00:07:58.286027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.892 [2024-05-15 00:07:58.288596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.892 [2024-05-15 00:07:58.297064] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.892 [2024-05-15 00:07:58.297633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.298041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.298080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.892 [2024-05-15 00:07:58.298112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.892 [2024-05-15 00:07:58.298340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.892 [2024-05-15 00:07:58.298507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.892 [2024-05-15 00:07:58.298518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.892 [2024-05-15 00:07:58.298527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.892 [2024-05-15 00:07:58.301087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.892 [2024-05-15 00:07:58.309869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.892 [2024-05-15 00:07:58.310415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.310636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.310677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.892 [2024-05-15 00:07:58.310708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.892 [2024-05-15 00:07:58.311108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.892 [2024-05-15 00:07:58.311291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.892 [2024-05-15 00:07:58.311302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.892 [2024-05-15 00:07:58.311311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.892 [2024-05-15 00:07:58.313909] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.892 [2024-05-15 00:07:58.322675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.892 [2024-05-15 00:07:58.323252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.323672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.323712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.892 [2024-05-15 00:07:58.323744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.892 [2024-05-15 00:07:58.324231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.892 [2024-05-15 00:07:58.324400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.892 [2024-05-15 00:07:58.324410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.892 [2024-05-15 00:07:58.324418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.892 [2024-05-15 00:07:58.326974] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.892 [2024-05-15 00:07:58.335506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.892 [2024-05-15 00:07:58.336131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.336500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.336542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.892 [2024-05-15 00:07:58.336574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.892 [2024-05-15 00:07:58.337097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.892 [2024-05-15 00:07:58.337346] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.892 [2024-05-15 00:07:58.337361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.892 [2024-05-15 00:07:58.337373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.892 [2024-05-15 00:07:58.341153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.892 [2024-05-15 00:07:58.348933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.892 [2024-05-15 00:07:58.349490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.349978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.350016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.892 [2024-05-15 00:07:58.350026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.892 [2024-05-15 00:07:58.350213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.892 [2024-05-15 00:07:58.350382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.892 [2024-05-15 00:07:58.350392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.892 [2024-05-15 00:07:58.350401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.892 [2024-05-15 00:07:58.352994] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.892 [2024-05-15 00:07:58.361800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.892 [2024-05-15 00:07:58.362425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.362765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.362805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.892 [2024-05-15 00:07:58.362837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.892 [2024-05-15 00:07:58.363442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.892 [2024-05-15 00:07:58.363645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.892 [2024-05-15 00:07:58.363655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.892 [2024-05-15 00:07:58.363664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.892 [2024-05-15 00:07:58.366248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.892 [2024-05-15 00:07:58.374624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.892 [2024-05-15 00:07:58.375206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.375623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-05-15 00:07:58.375663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.893 [2024-05-15 00:07:58.375695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.893 [2024-05-15 00:07:58.376161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.893 [2024-05-15 00:07:58.376333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.893 [2024-05-15 00:07:58.376344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.893 [2024-05-15 00:07:58.376355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.893 [2024-05-15 00:07:58.378954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.893 [2024-05-15 00:07:58.387452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.893 [2024-05-15 00:07:58.388043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.388524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.388566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.893 [2024-05-15 00:07:58.388599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.893 [2024-05-15 00:07:58.389205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.893 [2024-05-15 00:07:58.389648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.893 [2024-05-15 00:07:58.389658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.893 [2024-05-15 00:07:58.389667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.893 [2024-05-15 00:07:58.392227] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.893 [2024-05-15 00:07:58.400213] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.893 [2024-05-15 00:07:58.400774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.401237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.401279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.893 [2024-05-15 00:07:58.401312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.893 [2024-05-15 00:07:58.401906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.893 [2024-05-15 00:07:58.402481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.893 [2024-05-15 00:07:58.402491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.893 [2024-05-15 00:07:58.402500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.893 [2024-05-15 00:07:58.405054] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.893 [2024-05-15 00:07:58.413058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.893 [2024-05-15 00:07:58.413692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.414090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.414102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.893 [2024-05-15 00:07:58.414111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.893 [2024-05-15 00:07:58.414282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.893 [2024-05-15 00:07:58.414449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.893 [2024-05-15 00:07:58.414459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.893 [2024-05-15 00:07:58.414470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.893 [2024-05-15 00:07:58.417063] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.893 [2024-05-15 00:07:58.425898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.893 [2024-05-15 00:07:58.426490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.426971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.427011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.893 [2024-05-15 00:07:58.427044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.893 [2024-05-15 00:07:58.427392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.893 [2024-05-15 00:07:58.427560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.893 [2024-05-15 00:07:58.427571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.893 [2024-05-15 00:07:58.427580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.893 [2024-05-15 00:07:58.430166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.893 [2024-05-15 00:07:58.438778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.893 [2024-05-15 00:07:58.439316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.439714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.439726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.893 [2024-05-15 00:07:58.439736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.893 [2024-05-15 00:07:58.439903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.893 [2024-05-15 00:07:58.440069] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.893 [2024-05-15 00:07:58.440080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.893 [2024-05-15 00:07:58.440088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.893 [2024-05-15 00:07:58.442803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.893 [2024-05-15 00:07:58.451703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.893 [2024-05-15 00:07:58.452244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.452665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.452705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.893 [2024-05-15 00:07:58.452737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.893 [2024-05-15 00:07:58.453344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.893 [2024-05-15 00:07:58.453775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.893 [2024-05-15 00:07:58.453786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.893 [2024-05-15 00:07:58.453795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.893 [2024-05-15 00:07:58.456467] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.893 [2024-05-15 00:07:58.464672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.893 [2024-05-15 00:07:58.465267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-05-15 00:07:58.465624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-05-15 00:07:58.465664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.894 [2024-05-15 00:07:58.465696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.894 [2024-05-15 00:07:58.466303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.894 [2024-05-15 00:07:58.466482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.894 [2024-05-15 00:07:58.466492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.894 [2024-05-15 00:07:58.466501] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.894 [2024-05-15 00:07:58.469087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.894 [2024-05-15 00:07:58.477629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.894 [2024-05-15 00:07:58.478163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-05-15 00:07:58.478546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.894 [2024-05-15 00:07:58.478586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:57.894 [2024-05-15 00:07:58.478621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:57.894 [2024-05-15 00:07:58.478842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:57.894 [2024-05-15 00:07:58.479082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.894 [2024-05-15 00:07:58.479096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.894 [2024-05-15 00:07:58.479108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.154 [2024-05-15 00:07:58.483034] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.154 [2024-05-15 00:07:58.490822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.154 [2024-05-15 00:07:58.491396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.154 [2024-05-15 00:07:58.491800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.154 [2024-05-15 00:07:58.491841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.154 [2024-05-15 00:07:58.491873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.154 [2024-05-15 00:07:58.492483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.154 [2024-05-15 00:07:58.492899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.154 [2024-05-15 00:07:58.492909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.154 [2024-05-15 00:07:58.492918] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.154 [2024-05-15 00:07:58.495537] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.154 [2024-05-15 00:07:58.503556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.154 [2024-05-15 00:07:58.504117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.154 [2024-05-15 00:07:58.504333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.154 [2024-05-15 00:07:58.504375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.154 [2024-05-15 00:07:58.504407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.154 [2024-05-15 00:07:58.504737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.154 [2024-05-15 00:07:58.504904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.154 [2024-05-15 00:07:58.504915] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.154 [2024-05-15 00:07:58.504923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.154 [2024-05-15 00:07:58.507484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.154 [2024-05-15 00:07:58.516348] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.154 [2024-05-15 00:07:58.516993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.154 [2024-05-15 00:07:58.517477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.154 [2024-05-15 00:07:58.517518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.155 [2024-05-15 00:07:58.517543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.155 [2024-05-15 00:07:58.517709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.155 [2024-05-15 00:07:58.517877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.155 [2024-05-15 00:07:58.517887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.155 [2024-05-15 00:07:58.517896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.155 [2024-05-15 00:07:58.520410] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.155 [2024-05-15 00:07:58.529039] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.155 [2024-05-15 00:07:58.529675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.530159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.530214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.155 [2024-05-15 00:07:58.530246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.155 [2024-05-15 00:07:58.530843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.155 [2024-05-15 00:07:58.531011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.155 [2024-05-15 00:07:58.531021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.155 [2024-05-15 00:07:58.531029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.155 [2024-05-15 00:07:58.533632] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.155 [2024-05-15 00:07:58.541839] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.155 [2024-05-15 00:07:58.542469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.542955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.542995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.155 [2024-05-15 00:07:58.543027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.155 [2024-05-15 00:07:58.543538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.155 [2024-05-15 00:07:58.543706] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.155 [2024-05-15 00:07:58.543716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.155 [2024-05-15 00:07:58.543725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.155 [2024-05-15 00:07:58.546305] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.155 [2024-05-15 00:07:58.554737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.155 [2024-05-15 00:07:58.555309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.555793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.555833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.155 [2024-05-15 00:07:58.555865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.155 [2024-05-15 00:07:58.556471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.155 [2024-05-15 00:07:58.556863] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.155 [2024-05-15 00:07:58.556874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.155 [2024-05-15 00:07:58.556882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.155 [2024-05-15 00:07:58.559443] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.155 [2024-05-15 00:07:58.567592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.155 [2024-05-15 00:07:58.568231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.568689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.568729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.155 [2024-05-15 00:07:58.568762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.155 [2024-05-15 00:07:58.569056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.155 [2024-05-15 00:07:58.569228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.155 [2024-05-15 00:07:58.569239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.155 [2024-05-15 00:07:58.569248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.155 [2024-05-15 00:07:58.571802] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.155 [2024-05-15 00:07:58.580364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.155 [2024-05-15 00:07:58.580988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.581344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.581385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.155 [2024-05-15 00:07:58.581432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.155 [2024-05-15 00:07:58.581994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.155 [2024-05-15 00:07:58.582162] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.155 [2024-05-15 00:07:58.582172] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.155 [2024-05-15 00:07:58.582181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.155 [2024-05-15 00:07:58.584790] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.155 [2024-05-15 00:07:58.593225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.155 [2024-05-15 00:07:58.593851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.594232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.594274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.155 [2024-05-15 00:07:58.594306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.155 [2024-05-15 00:07:58.594769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.155 [2024-05-15 00:07:58.594928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.155 [2024-05-15 00:07:58.594937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.155 [2024-05-15 00:07:58.594946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.155 [2024-05-15 00:07:58.597608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.155 [2024-05-15 00:07:58.606064] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.155 [2024-05-15 00:07:58.606725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.607217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.155 [2024-05-15 00:07:58.607259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.155 [2024-05-15 00:07:58.607291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.155 [2024-05-15 00:07:58.607749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.155 [2024-05-15 00:07:58.607916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.155 [2024-05-15 00:07:58.607926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.155 [2024-05-15 00:07:58.607935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.155 [2024-05-15 00:07:58.610494] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.156 [2024-05-15 00:07:58.618875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.156 [2024-05-15 00:07:58.619321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.619679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.619719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.156 [2024-05-15 00:07:58.619751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.156 [2024-05-15 00:07:58.620235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.156 [2024-05-15 00:07:58.620403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.156 [2024-05-15 00:07:58.620414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.156 [2024-05-15 00:07:58.620423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.156 [2024-05-15 00:07:58.622980] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.156 [2024-05-15 00:07:58.631607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.156 [2024-05-15 00:07:58.632122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.632542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.632583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.156 [2024-05-15 00:07:58.632615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.156 [2024-05-15 00:07:58.633052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.156 [2024-05-15 00:07:58.633225] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.156 [2024-05-15 00:07:58.633235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.156 [2024-05-15 00:07:58.633244] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.156 [2024-05-15 00:07:58.635853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.156 [2024-05-15 00:07:58.644442] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.156 [2024-05-15 00:07:58.645085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.645519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.645560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.156 [2024-05-15 00:07:58.645593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.156 [2024-05-15 00:07:58.646061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.156 [2024-05-15 00:07:58.646233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.156 [2024-05-15 00:07:58.646243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.156 [2024-05-15 00:07:58.646252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.156 [2024-05-15 00:07:58.648856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.156 [2024-05-15 00:07:58.657201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.156 [2024-05-15 00:07:58.657841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.658262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.658274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.156 [2024-05-15 00:07:58.658284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.156 [2024-05-15 00:07:58.658451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.156 [2024-05-15 00:07:58.658621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.156 [2024-05-15 00:07:58.658632] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.156 [2024-05-15 00:07:58.658640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.156 [2024-05-15 00:07:58.661224] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.156 [2024-05-15 00:07:58.670061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.156 [2024-05-15 00:07:58.670701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.671166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.671219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.156 [2024-05-15 00:07:58.671262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.156 [2024-05-15 00:07:58.671501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.156 [2024-05-15 00:07:58.671741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.156 [2024-05-15 00:07:58.671755] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.156 [2024-05-15 00:07:58.671767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.156 [2024-05-15 00:07:58.675541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.156 [2024-05-15 00:07:58.683442] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.156 [2024-05-15 00:07:58.684085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.684569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.684610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.156 [2024-05-15 00:07:58.684643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.156 [2024-05-15 00:07:58.685160] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.156 [2024-05-15 00:07:58.685331] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.156 [2024-05-15 00:07:58.685341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.156 [2024-05-15 00:07:58.685350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.156 [2024-05-15 00:07:58.687942] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.156 [2024-05-15 00:07:58.696277] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.156 [2024-05-15 00:07:58.696831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.697254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.697268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.156 [2024-05-15 00:07:58.697278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.156 [2024-05-15 00:07:58.697449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.156 [2024-05-15 00:07:58.697620] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.156 [2024-05-15 00:07:58.697634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.156 [2024-05-15 00:07:58.697643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.156 [2024-05-15 00:07:58.700372] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.156 [2024-05-15 00:07:58.709209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.156 [2024-05-15 00:07:58.709859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.710275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.156 [2024-05-15 00:07:58.710318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.156 [2024-05-15 00:07:58.710351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.156 [2024-05-15 00:07:58.710946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.156 [2024-05-15 00:07:58.711199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.157 [2024-05-15 00:07:58.711210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.157 [2024-05-15 00:07:58.711235] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.157 [2024-05-15 00:07:58.713896] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.157 [2024-05-15 00:07:58.722051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.157 [2024-05-15 00:07:58.722674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.157 [2024-05-15 00:07:58.722957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.157 [2024-05-15 00:07:58.722997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.157 [2024-05-15 00:07:58.723030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.157 [2024-05-15 00:07:58.723642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.157 [2024-05-15 00:07:58.724221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.157 [2024-05-15 00:07:58.724232] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.157 [2024-05-15 00:07:58.724240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.157 [2024-05-15 00:07:58.726784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.157 [2024-05-15 00:07:58.734766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.157 [2024-05-15 00:07:58.735390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.157 [2024-05-15 00:07:58.735834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.157 [2024-05-15 00:07:58.735874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.157 [2024-05-15 00:07:58.735907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.157 [2024-05-15 00:07:58.736407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.157 [2024-05-15 00:07:58.736575] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.157 [2024-05-15 00:07:58.736585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.157 [2024-05-15 00:07:58.736596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.157 [2024-05-15 00:07:58.739156] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.417 [2024-05-15 00:07:58.747676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.417 [2024-05-15 00:07:58.748291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.417 [2024-05-15 00:07:58.748637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.417 [2024-05-15 00:07:58.748650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.417 [2024-05-15 00:07:58.748659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.417 [2024-05-15 00:07:58.748831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.417 [2024-05-15 00:07:58.749001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.417 [2024-05-15 00:07:58.749012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.417 [2024-05-15 00:07:58.749021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.417 [2024-05-15 00:07:58.751719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.417 [2024-05-15 00:07:58.760357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.417 [2024-05-15 00:07:58.760732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.417 [2024-05-15 00:07:58.761081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.417 [2024-05-15 00:07:58.761122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.417 [2024-05-15 00:07:58.761154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.418 [2024-05-15 00:07:58.761765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.418 [2024-05-15 00:07:58.762262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.418 [2024-05-15 00:07:58.762273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.418 [2024-05-15 00:07:58.762282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.418 [2024-05-15 00:07:58.765898] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.418 [2024-05-15 00:07:58.774073] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.418 [2024-05-15 00:07:58.774705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.775038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.775050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.418 [2024-05-15 00:07:58.775060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.418 [2024-05-15 00:07:58.775232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.418 [2024-05-15 00:07:58.775400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.418 [2024-05-15 00:07:58.775410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.418 [2024-05-15 00:07:58.775418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.418 [2024-05-15 00:07:58.777978] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.418 [2024-05-15 00:07:58.786855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.418 [2024-05-15 00:07:58.787500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.787991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.788032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.418 [2024-05-15 00:07:58.788077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.418 [2024-05-15 00:07:58.788258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.418 [2024-05-15 00:07:58.788425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.418 [2024-05-15 00:07:58.788436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.418 [2024-05-15 00:07:58.788445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.418 [2024-05-15 00:07:58.791037] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.418 [2024-05-15 00:07:58.799634] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.418 [2024-05-15 00:07:58.800204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.800691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.800731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.418 [2024-05-15 00:07:58.800763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.418 [2024-05-15 00:07:58.801378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.418 [2024-05-15 00:07:58.801550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.418 [2024-05-15 00:07:58.801561] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.418 [2024-05-15 00:07:58.801570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.418 [2024-05-15 00:07:58.804179] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.418 [2024-05-15 00:07:58.812335] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.418 [2024-05-15 00:07:58.812919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.813281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.813324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.418 [2024-05-15 00:07:58.813356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.418 [2024-05-15 00:07:58.813952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.418 [2024-05-15 00:07:58.814494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.418 [2024-05-15 00:07:58.814504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.418 [2024-05-15 00:07:58.814512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.418 [2024-05-15 00:07:58.816999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.418 [2024-05-15 00:07:58.824999] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.418 [2024-05-15 00:07:58.825620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.826110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.826149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.418 [2024-05-15 00:07:58.826181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.418 [2024-05-15 00:07:58.826672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.418 [2024-05-15 00:07:58.826839] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.418 [2024-05-15 00:07:58.826850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.418 [2024-05-15 00:07:58.826858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.418 [2024-05-15 00:07:58.829419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.418 [2024-05-15 00:07:58.837749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.418 [2024-05-15 00:07:58.838390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.838873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.838913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.418 [2024-05-15 00:07:58.838945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.418 [2024-05-15 00:07:58.839539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.418 [2024-05-15 00:07:58.839707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.418 [2024-05-15 00:07:58.839717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.418 [2024-05-15 00:07:58.839726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.418 [2024-05-15 00:07:58.842292] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.418 [2024-05-15 00:07:58.850488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.418 [2024-05-15 00:07:58.851121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.851614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.851656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.418 [2024-05-15 00:07:58.851687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.418 [2024-05-15 00:07:58.852269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.418 [2024-05-15 00:07:58.852449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.418 [2024-05-15 00:07:58.852459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.418 [2024-05-15 00:07:58.852468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.418 [2024-05-15 00:07:58.855058] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.418 [2024-05-15 00:07:58.863198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.418 [2024-05-15 00:07:58.863852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.864342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.864382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.418 [2024-05-15 00:07:58.864414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.418 [2024-05-15 00:07:58.865009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.418 [2024-05-15 00:07:58.865616] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.418 [2024-05-15 00:07:58.865651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.418 [2024-05-15 00:07:58.865682] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.418 [2024-05-15 00:07:58.868285] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.418 [2024-05-15 00:07:58.875918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.418 [2024-05-15 00:07:58.876421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.876803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.418 [2024-05-15 00:07:58.876843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.418 [2024-05-15 00:07:58.876874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.418 [2024-05-15 00:07:58.877388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.418 [2024-05-15 00:07:58.877556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.418 [2024-05-15 00:07:58.877566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.419 [2024-05-15 00:07:58.877575] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.419 [2024-05-15 00:07:58.880184] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.419 [2024-05-15 00:07:58.888614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.419 [2024-05-15 00:07:58.889230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.889657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.889696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.419 [2024-05-15 00:07:58.889728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.419 [2024-05-15 00:07:58.890341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.419 [2024-05-15 00:07:58.890864] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.419 [2024-05-15 00:07:58.890874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.419 [2024-05-15 00:07:58.890883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.419 [2024-05-15 00:07:58.893447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.419 [2024-05-15 00:07:58.901415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.419 [2024-05-15 00:07:58.902058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.902543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.902593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.419 [2024-05-15 00:07:58.902625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.419 [2024-05-15 00:07:58.903233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.419 [2024-05-15 00:07:58.903736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.419 [2024-05-15 00:07:58.903750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.419 [2024-05-15 00:07:58.903762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.419 [2024-05-15 00:07:58.907545] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.419 [2024-05-15 00:07:58.915166] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.419 [2024-05-15 00:07:58.915684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.916150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.916189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.419 [2024-05-15 00:07:58.916236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.419 [2024-05-15 00:07:58.916491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.419 [2024-05-15 00:07:58.916663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.419 [2024-05-15 00:07:58.916673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.419 [2024-05-15 00:07:58.916682] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.419 [2024-05-15 00:07:58.919287] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.419 [2024-05-15 00:07:58.927982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.419 [2024-05-15 00:07:58.928589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.929025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.929064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.419 [2024-05-15 00:07:58.929096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.419 [2024-05-15 00:07:58.929709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.419 [2024-05-15 00:07:58.930011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.419 [2024-05-15 00:07:58.930021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.419 [2024-05-15 00:07:58.930030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.419 [2024-05-15 00:07:58.932593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.419 [2024-05-15 00:07:58.940743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.419 [2024-05-15 00:07:58.941363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.941715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.941761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.419 [2024-05-15 00:07:58.941801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.419 [2024-05-15 00:07:58.942413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.419 [2024-05-15 00:07:58.942632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.419 [2024-05-15 00:07:58.942642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.419 [2024-05-15 00:07:58.942651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.419 [2024-05-15 00:07:58.945214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.419 [2024-05-15 00:07:58.953539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.419 [2024-05-15 00:07:58.954171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.954598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.954611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.419 [2024-05-15 00:07:58.954620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.419 [2024-05-15 00:07:58.954791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.419 [2024-05-15 00:07:58.954963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.419 [2024-05-15 00:07:58.954974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.419 [2024-05-15 00:07:58.954982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.419 [2024-05-15 00:07:58.957715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.419 [2024-05-15 00:07:58.966495] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.419 [2024-05-15 00:07:58.966924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.967357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.967370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.419 [2024-05-15 00:07:58.967379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.419 [2024-05-15 00:07:58.967547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.419 [2024-05-15 00:07:58.967714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.419 [2024-05-15 00:07:58.967724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.419 [2024-05-15 00:07:58.967733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.419 [2024-05-15 00:07:58.970388] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.419 [2024-05-15 00:07:58.979420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.419 [2024-05-15 00:07:58.980049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.980534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.980576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.419 [2024-05-15 00:07:58.980609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.419 [2024-05-15 00:07:58.981112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.419 [2024-05-15 00:07:58.981287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.419 [2024-05-15 00:07:58.981298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.419 [2024-05-15 00:07:58.981307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.419 [2024-05-15 00:07:58.983941] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.419 [2024-05-15 00:07:58.992209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.419 [2024-05-15 00:07:58.992858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.993114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:58.993154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.419 [2024-05-15 00:07:58.993186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.419 [2024-05-15 00:07:58.993800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.419 [2024-05-15 00:07:58.994012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.419 [2024-05-15 00:07:58.994022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.419 [2024-05-15 00:07:58.994031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.419 [2024-05-15 00:07:58.996594] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.419 [2024-05-15 00:07:59.005253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.419 [2024-05-15 00:07:59.005882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:59.006071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.419 [2024-05-15 00:07:59.006084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.419 [2024-05-15 00:07:59.006093] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.420 [2024-05-15 00:07:59.006269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.420 [2024-05-15 00:07:59.006442] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.420 [2024-05-15 00:07:59.006452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.420 [2024-05-15 00:07:59.006461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.680 [2024-05-15 00:07:59.009158] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.680 [2024-05-15 00:07:59.018075] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.680 [2024-05-15 00:07:59.018694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.680 [2024-05-15 00:07:59.019159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.680 [2024-05-15 00:07:59.019214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.680 [2024-05-15 00:07:59.019247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.680 [2024-05-15 00:07:59.019704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.680 [2024-05-15 00:07:59.019875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.680 [2024-05-15 00:07:59.019885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.680 [2024-05-15 00:07:59.019894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.680 [2024-05-15 00:07:59.022458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.680 [2024-05-15 00:07:59.030927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.680 [2024-05-15 00:07:59.031485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.680 [2024-05-15 00:07:59.031849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.680 [2024-05-15 00:07:59.031889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.680 [2024-05-15 00:07:59.031921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.680 [2024-05-15 00:07:59.032528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.680 [2024-05-15 00:07:59.033037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.680 [2024-05-15 00:07:59.033048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.680 [2024-05-15 00:07:59.033056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.680 [2024-05-15 00:07:59.035656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.680 [2024-05-15 00:07:59.043756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.680 [2024-05-15 00:07:59.044398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.680 [2024-05-15 00:07:59.044800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.680 [2024-05-15 00:07:59.044835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.680 [2024-05-15 00:07:59.044844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.680 [2024-05-15 00:07:59.045002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.680 [2024-05-15 00:07:59.045160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.680 [2024-05-15 00:07:59.045170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.680 [2024-05-15 00:07:59.045178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.680 [2024-05-15 00:07:59.047764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.680 [2024-05-15 00:07:59.056544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.680 [2024-05-15 00:07:59.056939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.680 [2024-05-15 00:07:59.057368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.680 [2024-05-15 00:07:59.057410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.680 [2024-05-15 00:07:59.057441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.680 [2024-05-15 00:07:59.057909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.681 [2024-05-15 00:07:59.058076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.681 [2024-05-15 00:07:59.058089] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.681 [2024-05-15 00:07:59.058098] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.681 [2024-05-15 00:07:59.060715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.681 [2024-05-15 00:07:59.069364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.681 [2024-05-15 00:07:59.070006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.070419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.070461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.681 [2024-05-15 00:07:59.070493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.681 [2024-05-15 00:07:59.071053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.681 [2024-05-15 00:07:59.071224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.681 [2024-05-15 00:07:59.071235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.681 [2024-05-15 00:07:59.071244] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.681 [2024-05-15 00:07:59.073803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.681 [2024-05-15 00:07:59.082117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.681 [2024-05-15 00:07:59.082685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.083092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.083131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.681 [2024-05-15 00:07:59.083163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.681 [2024-05-15 00:07:59.083769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.681 [2024-05-15 00:07:59.084256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.681 [2024-05-15 00:07:59.084267] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.681 [2024-05-15 00:07:59.084276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.681 [2024-05-15 00:07:59.086834] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.681 [2024-05-15 00:07:59.094928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.681 [2024-05-15 00:07:59.095559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.096066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.096107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.681 [2024-05-15 00:07:59.096138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.681 [2024-05-15 00:07:59.096752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.681 [2024-05-15 00:07:59.097187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.681 [2024-05-15 00:07:59.097206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.681 [2024-05-15 00:07:59.097222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.681 [2024-05-15 00:07:59.100995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.681 [2024-05-15 00:07:59.108067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.681 [2024-05-15 00:07:59.108680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.109115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.109156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.681 [2024-05-15 00:07:59.109187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.681 [2024-05-15 00:07:59.109798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.681 [2024-05-15 00:07:59.110216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.681 [2024-05-15 00:07:59.110227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.681 [2024-05-15 00:07:59.110236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.681 [2024-05-15 00:07:59.112840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.681 [2024-05-15 00:07:59.120770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.681 [2024-05-15 00:07:59.121376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.121886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.121927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.681 [2024-05-15 00:07:59.121959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.681 [2024-05-15 00:07:59.122570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.681 [2024-05-15 00:07:59.123113] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.681 [2024-05-15 00:07:59.123123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.681 [2024-05-15 00:07:59.123132] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.681 [2024-05-15 00:07:59.125702] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.681 [2024-05-15 00:07:59.133576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.681 [2024-05-15 00:07:59.134200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.134710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.134750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.681 [2024-05-15 00:07:59.134782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.681 [2024-05-15 00:07:59.135322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.681 [2024-05-15 00:07:59.135489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.681 [2024-05-15 00:07:59.135499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.681 [2024-05-15 00:07:59.135508] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.681 [2024-05-15 00:07:59.138045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.681 [2024-05-15 00:07:59.146336] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.681 [2024-05-15 00:07:59.146956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.147357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.147400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.681 [2024-05-15 00:07:59.147432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.681 [2024-05-15 00:07:59.147955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.681 [2024-05-15 00:07:59.148127] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.681 [2024-05-15 00:07:59.148138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.681 [2024-05-15 00:07:59.148147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.681 [2024-05-15 00:07:59.150851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.681 [2024-05-15 00:07:59.159151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.681 [2024-05-15 00:07:59.159779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.160271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.160312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.681 [2024-05-15 00:07:59.160344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.681 [2024-05-15 00:07:59.160940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.681 [2024-05-15 00:07:59.161439] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.681 [2024-05-15 00:07:59.161450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.681 [2024-05-15 00:07:59.161458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.681 [2024-05-15 00:07:59.164078] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.681 [2024-05-15 00:07:59.171972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.681 [2024-05-15 00:07:59.172591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.173076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.173117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.681 [2024-05-15 00:07:59.173148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.681 [2024-05-15 00:07:59.173757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.681 [2024-05-15 00:07:59.174316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.681 [2024-05-15 00:07:59.174327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.681 [2024-05-15 00:07:59.174336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.681 [2024-05-15 00:07:59.176899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.681 [2024-05-15 00:07:59.184789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.681 [2024-05-15 00:07:59.185410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.681 [2024-05-15 00:07:59.185896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.185937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.682 [2024-05-15 00:07:59.185969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.682 [2024-05-15 00:07:59.186458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.682 [2024-05-15 00:07:59.186626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.682 [2024-05-15 00:07:59.186636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.682 [2024-05-15 00:07:59.186645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.682 [2024-05-15 00:07:59.189208] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.682 [2024-05-15 00:07:59.197623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.682 [2024-05-15 00:07:59.198231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.198740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.198781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.682 [2024-05-15 00:07:59.198813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.682 [2024-05-15 00:07:59.199194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.682 [2024-05-15 00:07:59.199361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.682 [2024-05-15 00:07:59.199371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.682 [2024-05-15 00:07:59.199380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.682 [2024-05-15 00:07:59.201920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.682 [2024-05-15 00:07:59.210588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.682 [2024-05-15 00:07:59.211119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.211559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.211573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.682 [2024-05-15 00:07:59.211582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.682 [2024-05-15 00:07:59.211754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.682 [2024-05-15 00:07:59.211927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.682 [2024-05-15 00:07:59.211938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.682 [2024-05-15 00:07:59.211946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.682 [2024-05-15 00:07:59.214648] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.682 [2024-05-15 00:07:59.223575] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.682 [2024-05-15 00:07:59.224216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.224711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.224752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.682 [2024-05-15 00:07:59.224785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.682 [2024-05-15 00:07:59.225400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.682 [2024-05-15 00:07:59.225598] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.682 [2024-05-15 00:07:59.225608] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.682 [2024-05-15 00:07:59.225616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.682 [2024-05-15 00:07:59.228241] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.682 [2024-05-15 00:07:59.236465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.682 [2024-05-15 00:07:59.237069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.237525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.237539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.682 [2024-05-15 00:07:59.237549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.682 [2024-05-15 00:07:59.237721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.682 [2024-05-15 00:07:59.237896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.682 [2024-05-15 00:07:59.237906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.682 [2024-05-15 00:07:59.237915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.682 [2024-05-15 00:07:59.240542] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.682 [2024-05-15 00:07:59.249265] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.682 [2024-05-15 00:07:59.249882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.250360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.250373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.682 [2024-05-15 00:07:59.250383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.682 [2024-05-15 00:07:59.250555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.682 [2024-05-15 00:07:59.250727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.682 [2024-05-15 00:07:59.250737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.682 [2024-05-15 00:07:59.250746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.682 [2024-05-15 00:07:59.253334] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.682 [2024-05-15 00:07:59.261960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.682 [2024-05-15 00:07:59.262541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.262966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.682 [2024-05-15 00:07:59.263006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.682 [2024-05-15 00:07:59.263045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.682 [2024-05-15 00:07:59.263396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.682 [2024-05-15 00:07:59.263568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.682 [2024-05-15 00:07:59.263579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.682 [2024-05-15 00:07:59.263588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.682 [2024-05-15 00:07:59.266273] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.943 [2024-05-15 00:07:59.274746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.943 [2024-05-15 00:07:59.275329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.275771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.275784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.943 [2024-05-15 00:07:59.275793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.943 [2024-05-15 00:07:59.275965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.943 [2024-05-15 00:07:59.276136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.943 [2024-05-15 00:07:59.276147] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.943 [2024-05-15 00:07:59.276156] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.943 [2024-05-15 00:07:59.278811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.943 [2024-05-15 00:07:59.287430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.943 [2024-05-15 00:07:59.288049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.288550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.288592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.943 [2024-05-15 00:07:59.288625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.943 [2024-05-15 00:07:59.289232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.943 [2024-05-15 00:07:59.289496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.943 [2024-05-15 00:07:59.289506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.943 [2024-05-15 00:07:59.289515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.943 [2024-05-15 00:07:59.292071] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.943 [2024-05-15 00:07:59.300162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.943 [2024-05-15 00:07:59.300778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.301282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.301339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.943 [2024-05-15 00:07:59.301372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.943 [2024-05-15 00:07:59.301546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.943 [2024-05-15 00:07:59.301704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.943 [2024-05-15 00:07:59.301714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.943 [2024-05-15 00:07:59.301722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.943 [2024-05-15 00:07:59.304210] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.943 [2024-05-15 00:07:59.312829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.943 [2024-05-15 00:07:59.313427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.313935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.313979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.943 [2024-05-15 00:07:59.314010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.943 [2024-05-15 00:07:59.314621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.943 [2024-05-15 00:07:59.314789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.943 [2024-05-15 00:07:59.314799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.943 [2024-05-15 00:07:59.314808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.943 [2024-05-15 00:07:59.317452] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.943 [2024-05-15 00:07:59.325630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.943 [2024-05-15 00:07:59.326249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.326688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.326729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.943 [2024-05-15 00:07:59.326760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.943 [2024-05-15 00:07:59.327371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.943 [2024-05-15 00:07:59.327961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.943 [2024-05-15 00:07:59.327971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.943 [2024-05-15 00:07:59.327980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.943 [2024-05-15 00:07:59.330495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.943 [2024-05-15 00:07:59.338307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.943 [2024-05-15 00:07:59.338887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.339324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.339337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.943 [2024-05-15 00:07:59.339347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.943 [2024-05-15 00:07:59.339514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.943 [2024-05-15 00:07:59.339684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.943 [2024-05-15 00:07:59.339694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.943 [2024-05-15 00:07:59.339703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.943 [2024-05-15 00:07:59.342267] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.943 [2024-05-15 00:07:59.351128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.943 [2024-05-15 00:07:59.351734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.352246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.943 [2024-05-15 00:07:59.352287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.943 [2024-05-15 00:07:59.352319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.944 [2024-05-15 00:07:59.352713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.944 [2024-05-15 00:07:59.352881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.944 [2024-05-15 00:07:59.352891] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.944 [2024-05-15 00:07:59.352900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.944 [2024-05-15 00:07:59.355458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.944 [2024-05-15 00:07:59.363850] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.944 [2024-05-15 00:07:59.364470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.364969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.365009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.944 [2024-05-15 00:07:59.365042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.944 [2024-05-15 00:07:59.365602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.944 [2024-05-15 00:07:59.365770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.944 [2024-05-15 00:07:59.365780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.944 [2024-05-15 00:07:59.365789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.944 [2024-05-15 00:07:59.368350] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.944 [2024-05-15 00:07:59.376524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.944 [2024-05-15 00:07:59.377110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.377555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.377568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.944 [2024-05-15 00:07:59.377577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.944 [2024-05-15 00:07:59.377744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.944 [2024-05-15 00:07:59.377911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.944 [2024-05-15 00:07:59.377923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.944 [2024-05-15 00:07:59.377932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.944 [2024-05-15 00:07:59.380489] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.944 [2024-05-15 00:07:59.389275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.944 [2024-05-15 00:07:59.389891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.390372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.390414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.944 [2024-05-15 00:07:59.390446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.944 [2024-05-15 00:07:59.391040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.944 [2024-05-15 00:07:59.391212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.944 [2024-05-15 00:07:59.391223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.944 [2024-05-15 00:07:59.391231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.944 [2024-05-15 00:07:59.393788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.944 [2024-05-15 00:07:59.401941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.944 [2024-05-15 00:07:59.402570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.403059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.403103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.944 [2024-05-15 00:07:59.403112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.944 [2024-05-15 00:07:59.403296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.944 [2024-05-15 00:07:59.403464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.944 [2024-05-15 00:07:59.403474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.944 [2024-05-15 00:07:59.403483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.944 [2024-05-15 00:07:59.406042] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.944 [2024-05-15 00:07:59.414686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.944 [2024-05-15 00:07:59.415291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.415726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.415739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.944 [2024-05-15 00:07:59.415748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.944 [2024-05-15 00:07:59.415916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.944 [2024-05-15 00:07:59.416083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.944 [2024-05-15 00:07:59.416093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.944 [2024-05-15 00:07:59.416104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.944 [2024-05-15 00:07:59.418715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.944 [2024-05-15 00:07:59.427533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.944 [2024-05-15 00:07:59.428057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.428561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.428603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.944 [2024-05-15 00:07:59.428635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.944 [2024-05-15 00:07:59.429243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.944 [2024-05-15 00:07:59.429842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.944 [2024-05-15 00:07:59.429872] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.944 [2024-05-15 00:07:59.429881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.944 [2024-05-15 00:07:59.433447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.944 [2024-05-15 00:07:59.441273] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.944 [2024-05-15 00:07:59.441892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.442316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.442358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.944 [2024-05-15 00:07:59.442389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.944 [2024-05-15 00:07:59.442939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.944 [2024-05-15 00:07:59.443106] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.944 [2024-05-15 00:07:59.443116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.944 [2024-05-15 00:07:59.443125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.944 [2024-05-15 00:07:59.445687] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.944 [2024-05-15 00:07:59.454055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.944 [2024-05-15 00:07:59.454671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.455160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.455172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.944 [2024-05-15 00:07:59.455181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.944 [2024-05-15 00:07:59.455374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.944 [2024-05-15 00:07:59.455546] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.944 [2024-05-15 00:07:59.455557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.944 [2024-05-15 00:07:59.455566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.944 [2024-05-15 00:07:59.458149] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.944 [2024-05-15 00:07:59.466868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.944 [2024-05-15 00:07:59.467481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.467923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.944 [2024-05-15 00:07:59.467935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.944 [2024-05-15 00:07:59.467945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.944 [2024-05-15 00:07:59.468117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.944 [2024-05-15 00:07:59.468293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.944 [2024-05-15 00:07:59.468304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.944 [2024-05-15 00:07:59.468313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.945 [2024-05-15 00:07:59.471011] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.945 [2024-05-15 00:07:59.479777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.945 [2024-05-15 00:07:59.480405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.945 [2024-05-15 00:07:59.480917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.945 [2024-05-15 00:07:59.480958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.945 [2024-05-15 00:07:59.480990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.945 [2024-05-15 00:07:59.481204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.945 [2024-05-15 00:07:59.481391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.945 [2024-05-15 00:07:59.481402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.945 [2024-05-15 00:07:59.481411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.945 [2024-05-15 00:07:59.484103] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.945 [2024-05-15 00:07:59.492642] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.945 [2024-05-15 00:07:59.493236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.945 [2024-05-15 00:07:59.493598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.945 [2024-05-15 00:07:59.493639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.945 [2024-05-15 00:07:59.493671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.945 [2024-05-15 00:07:59.494178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.945 [2024-05-15 00:07:59.494351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.945 [2024-05-15 00:07:59.494362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.945 [2024-05-15 00:07:59.494370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.945 [2024-05-15 00:07:59.497031] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.945 [2024-05-15 00:07:59.505494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.945 [2024-05-15 00:07:59.506102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.945 [2024-05-15 00:07:59.506520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.945 [2024-05-15 00:07:59.506534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.945 [2024-05-15 00:07:59.506544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.945 [2024-05-15 00:07:59.506718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.945 [2024-05-15 00:07:59.506877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.945 [2024-05-15 00:07:59.506886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.945 [2024-05-15 00:07:59.506895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.945 [2024-05-15 00:07:59.509470] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.945 [2024-05-15 00:07:59.518239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.945 [2024-05-15 00:07:59.518881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.945 [2024-05-15 00:07:59.519292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.945 [2024-05-15 00:07:59.519334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.945 [2024-05-15 00:07:59.519367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.945 [2024-05-15 00:07:59.519909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.945 [2024-05-15 00:07:59.520076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.945 [2024-05-15 00:07:59.520087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.945 [2024-05-15 00:07:59.520095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.945 [2024-05-15 00:07:59.522659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.945 [2024-05-15 00:07:59.531177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.945 [2024-05-15 00:07:59.531776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.945 [2024-05-15 00:07:59.532269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.945 [2024-05-15 00:07:59.532311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:58.945 [2024-05-15 00:07:59.532342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:58.945 [2024-05-15 00:07:59.532551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:58.945 [2024-05-15 00:07:59.532723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.945 [2024-05-15 00:07:59.532734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.945 [2024-05-15 00:07:59.532743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.205 [2024-05-15 00:07:59.535449] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.205 [2024-05-15 00:07:59.544077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.205 [2024-05-15 00:07:59.544638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.205 [2024-05-15 00:07:59.545179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.205 [2024-05-15 00:07:59.545231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.205 [2024-05-15 00:07:59.545265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.205 [2024-05-15 00:07:59.545860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.205 [2024-05-15 00:07:59.546444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.205 [2024-05-15 00:07:59.546456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.205 [2024-05-15 00:07:59.546465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.205 [2024-05-15 00:07:59.549099] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.205 [2024-05-15 00:07:59.556893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.205 [2024-05-15 00:07:59.557506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.205 [2024-05-15 00:07:59.557859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.205 [2024-05-15 00:07:59.557899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.205 [2024-05-15 00:07:59.557933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.205 [2024-05-15 00:07:59.558354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.205 [2024-05-15 00:07:59.558522] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.205 [2024-05-15 00:07:59.558533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.205 [2024-05-15 00:07:59.558541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.205 [2024-05-15 00:07:59.561134] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.205 [2024-05-15 00:07:59.569931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.205 [2024-05-15 00:07:59.570586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.205 [2024-05-15 00:07:59.571098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.205 [2024-05-15 00:07:59.571138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.205 [2024-05-15 00:07:59.571170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.205 [2024-05-15 00:07:59.571393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.205 [2024-05-15 00:07:59.571566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.205 [2024-05-15 00:07:59.571576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.205 [2024-05-15 00:07:59.571585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.205 [2024-05-15 00:07:59.574259] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.205 [2024-05-15 00:07:59.582740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.205 [2024-05-15 00:07:59.583249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.205 [2024-05-15 00:07:59.583724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.205 [2024-05-15 00:07:59.583772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.205 [2024-05-15 00:07:59.583804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.205 [2024-05-15 00:07:59.584348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.205 [2024-05-15 00:07:59.584521] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.205 [2024-05-15 00:07:59.584531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.205 [2024-05-15 00:07:59.584540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.205 [2024-05-15 00:07:59.587115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.205 [2024-05-15 00:07:59.595622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.205 [2024-05-15 00:07:59.596253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.205 [2024-05-15 00:07:59.596670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.205 [2024-05-15 00:07:59.596710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.205 [2024-05-15 00:07:59.596742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.205 [2024-05-15 00:07:59.597210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.205 [2024-05-15 00:07:59.597379] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.205 [2024-05-15 00:07:59.597389] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.205 [2024-05-15 00:07:59.597398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.205 [2024-05-15 00:07:59.599958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.206 [2024-05-15 00:07:59.608419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.206 [2024-05-15 00:07:59.609090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.609489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.609531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.206 [2024-05-15 00:07:59.609562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.206 [2024-05-15 00:07:59.610095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.206 [2024-05-15 00:07:59.610269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.206 [2024-05-15 00:07:59.610280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.206 [2024-05-15 00:07:59.610289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.206 [2024-05-15 00:07:59.612846] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.206 [2024-05-15 00:07:59.621151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.206 [2024-05-15 00:07:59.621800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.622265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.622307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.206 [2024-05-15 00:07:59.622346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.206 [2024-05-15 00:07:59.622944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.206 [2024-05-15 00:07:59.623290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.206 [2024-05-15 00:07:59.623302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.206 [2024-05-15 00:07:59.623311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.206 [2024-05-15 00:07:59.625873] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.206 [2024-05-15 00:07:59.633963] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.206 [2024-05-15 00:07:59.634600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.635029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.635069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.206 [2024-05-15 00:07:59.635101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.206 [2024-05-15 00:07:59.635375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.206 [2024-05-15 00:07:59.635543] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.206 [2024-05-15 00:07:59.635554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.206 [2024-05-15 00:07:59.635562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.206 [2024-05-15 00:07:59.638149] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.206 [2024-05-15 00:07:59.646765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.206 [2024-05-15 00:07:59.647396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.647859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.647898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.206 [2024-05-15 00:07:59.647930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.206 [2024-05-15 00:07:59.648522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.206 [2024-05-15 00:07:59.648689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.206 [2024-05-15 00:07:59.648699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.206 [2024-05-15 00:07:59.648708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.206 [2024-05-15 00:07:59.651303] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.206 [2024-05-15 00:07:59.659568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.206 [2024-05-15 00:07:59.660226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.660594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.660635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.206 [2024-05-15 00:07:59.660668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.206 [2024-05-15 00:07:59.661278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.206 [2024-05-15 00:07:59.661446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.206 [2024-05-15 00:07:59.661456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.206 [2024-05-15 00:07:59.661465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.206 [2024-05-15 00:07:59.664059] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.206 [2024-05-15 00:07:59.672357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.206 [2024-05-15 00:07:59.672919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.673397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.673439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.206 [2024-05-15 00:07:59.673471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.206 [2024-05-15 00:07:59.674065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.206 [2024-05-15 00:07:59.674574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.206 [2024-05-15 00:07:59.674585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.206 [2024-05-15 00:07:59.674594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.206 [2024-05-15 00:07:59.677221] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.206 [2024-05-15 00:07:59.685139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.206 [2024-05-15 00:07:59.685688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.686400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.686443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.206 [2024-05-15 00:07:59.686476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.206 [2024-05-15 00:07:59.686728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.206 [2024-05-15 00:07:59.686895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.206 [2024-05-15 00:07:59.686906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.206 [2024-05-15 00:07:59.686915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.206 [2024-05-15 00:07:59.689509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.206 [2024-05-15 00:07:59.697932] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.206 [2024-05-15 00:07:59.698495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.698924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.698966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.206 [2024-05-15 00:07:59.698999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.206 [2024-05-15 00:07:59.699442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.206 [2024-05-15 00:07:59.699613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.206 [2024-05-15 00:07:59.699624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.206 [2024-05-15 00:07:59.699632] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.206 [2024-05-15 00:07:59.702196] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.206 [2024-05-15 00:07:59.710631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.206 [2024-05-15 00:07:59.711261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.711622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.711662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.206 [2024-05-15 00:07:59.711694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.206 [2024-05-15 00:07:59.712300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.206 [2024-05-15 00:07:59.712808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.206 [2024-05-15 00:07:59.712819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.206 [2024-05-15 00:07:59.712828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.206 [2024-05-15 00:07:59.715400] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.206 [2024-05-15 00:07:59.723436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.206 [2024-05-15 00:07:59.724065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.724361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.206 [2024-05-15 00:07:59.724375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.206 [2024-05-15 00:07:59.724385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.206 [2024-05-15 00:07:59.724574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.206 [2024-05-15 00:07:59.724746] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.207 [2024-05-15 00:07:59.724757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.207 [2024-05-15 00:07:59.724767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.207 [2024-05-15 00:07:59.727497] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.207 [2024-05-15 00:07:59.736418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.207 [2024-05-15 00:07:59.736907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.207 [2024-05-15 00:07:59.737321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.207 [2024-05-15 00:07:59.737363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.207 [2024-05-15 00:07:59.737395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.207 [2024-05-15 00:07:59.737864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.207 [2024-05-15 00:07:59.738031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.207 [2024-05-15 00:07:59.738045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.207 [2024-05-15 00:07:59.738054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.207 [2024-05-15 00:07:59.740754] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.207 [2024-05-15 00:07:59.749286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.207 [2024-05-15 00:07:59.749929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.207 [2024-05-15 00:07:59.750300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.207 [2024-05-15 00:07:59.750342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.207 [2024-05-15 00:07:59.750373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.207 [2024-05-15 00:07:59.750602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.207 [2024-05-15 00:07:59.750775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.207 [2024-05-15 00:07:59.750785] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.207 [2024-05-15 00:07:59.750794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.207 [2024-05-15 00:07:59.753466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.207 [2024-05-15 00:07:59.762158] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.207 [2024-05-15 00:07:59.762732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.207 [2024-05-15 00:07:59.763141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.207 [2024-05-15 00:07:59.763181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.207 [2024-05-15 00:07:59.763227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.207 [2024-05-15 00:07:59.763788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.207 [2024-05-15 00:07:59.764029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.207 [2024-05-15 00:07:59.764043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.207 [2024-05-15 00:07:59.764055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.207 [2024-05-15 00:07:59.767835] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.207 [2024-05-15 00:07:59.775644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.207 [2024-05-15 00:07:59.776274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.207 [2024-05-15 00:07:59.776697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.207 [2024-05-15 00:07:59.776737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.207 [2024-05-15 00:07:59.776770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.207 [2024-05-15 00:07:59.777378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.207 [2024-05-15 00:07:59.777803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.207 [2024-05-15 00:07:59.777814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.207 [2024-05-15 00:07:59.777828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.207 [2024-05-15 00:07:59.780467] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.207 [2024-05-15 00:07:59.788484] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.207 [2024-05-15 00:07:59.789106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.207 [2024-05-15 00:07:59.789543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.207 [2024-05-15 00:07:59.789588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.207 [2024-05-15 00:07:59.789598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.207 [2024-05-15 00:07:59.789766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.207 [2024-05-15 00:07:59.789934] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.207 [2024-05-15 00:07:59.789944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.207 [2024-05-15 00:07:59.789953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.207 [2024-05-15 00:07:59.792643] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.468 [2024-05-15 00:07:59.801371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.468 [2024-05-15 00:07:59.802004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.468 [2024-05-15 00:07:59.802415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.468 [2024-05-15 00:07:59.802456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.468 [2024-05-15 00:07:59.802488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.468 [2024-05-15 00:07:59.802674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.468 [2024-05-15 00:07:59.802852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.468 [2024-05-15 00:07:59.802862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.468 [2024-05-15 00:07:59.802871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.468 [2024-05-15 00:07:59.805478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.468 [2024-05-15 00:07:59.814251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.468 [2024-05-15 00:07:59.814894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.468 [2024-05-15 00:07:59.815379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.468 [2024-05-15 00:07:59.815420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.468 [2024-05-15 00:07:59.815451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.468 [2024-05-15 00:07:59.815752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.468 [2024-05-15 00:07:59.815919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.468 [2024-05-15 00:07:59.815930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.468 [2024-05-15 00:07:59.815939] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.468 [2024-05-15 00:07:59.818503] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.468 [2024-05-15 00:07:59.827141] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.468 [2024-05-15 00:07:59.827728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.468 [2024-05-15 00:07:59.828216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.468 [2024-05-15 00:07:59.828258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.468 [2024-05-15 00:07:59.828290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.468 [2024-05-15 00:07:59.828885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.468 [2024-05-15 00:07:59.829053] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.468 [2024-05-15 00:07:59.829063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.468 [2024-05-15 00:07:59.829072] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.468 [2024-05-15 00:07:59.831637] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.468 [2024-05-15 00:07:59.839961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.468 [2024-05-15 00:07:59.840545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.468 [2024-05-15 00:07:59.840967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.468 [2024-05-15 00:07:59.841008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.468 [2024-05-15 00:07:59.841039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.468 [2024-05-15 00:07:59.841646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.468 [2024-05-15 00:07:59.842140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.468 [2024-05-15 00:07:59.842150] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.468 [2024-05-15 00:07:59.842159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.468 [2024-05-15 00:07:59.844803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.468 [2024-05-15 00:07:59.852774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.469 [2024-05-15 00:07:59.853422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.853787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.853801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.469 [2024-05-15 00:07:59.853810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.469 [2024-05-15 00:07:59.853982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.469 [2024-05-15 00:07:59.854157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.469 [2024-05-15 00:07:59.854167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.469 [2024-05-15 00:07:59.854176] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.469 [2024-05-15 00:07:59.856794] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.469 [2024-05-15 00:07:59.865615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.469 [2024-05-15 00:07:59.866236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.866655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.866706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.469 [2024-05-15 00:07:59.866716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.469 [2024-05-15 00:07:59.866883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.469 [2024-05-15 00:07:59.867050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.469 [2024-05-15 00:07:59.867060] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.469 [2024-05-15 00:07:59.867069] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.469 [2024-05-15 00:07:59.869675] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.469 [2024-05-15 00:07:59.878420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.469 [2024-05-15 00:07:59.878970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.879490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.879535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.469 [2024-05-15 00:07:59.879567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.469 [2024-05-15 00:07:59.879832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.469 [2024-05-15 00:07:59.879990] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.469 [2024-05-15 00:07:59.880000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.469 [2024-05-15 00:07:59.880009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.469 [2024-05-15 00:07:59.882614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.469 [2024-05-15 00:07:59.891093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.469 [2024-05-15 00:07:59.891670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.892095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.892135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.469 [2024-05-15 00:07:59.892168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.469 [2024-05-15 00:07:59.892775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.469 [2024-05-15 00:07:59.892943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.469 [2024-05-15 00:07:59.892953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.469 [2024-05-15 00:07:59.892962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.469 [2024-05-15 00:07:59.895534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.469 [2024-05-15 00:07:59.903853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.469 [2024-05-15 00:07:59.904514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.904927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.904967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.469 [2024-05-15 00:07:59.904999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.469 [2024-05-15 00:07:59.905610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.469 [2024-05-15 00:07:59.906067] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.469 [2024-05-15 00:07:59.906077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.469 [2024-05-15 00:07:59.906086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.469 [2024-05-15 00:07:59.909784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.469 [2024-05-15 00:07:59.917458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.469 [2024-05-15 00:07:59.918027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.918389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.918430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.469 [2024-05-15 00:07:59.918462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.469 [2024-05-15 00:07:59.918945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.469 [2024-05-15 00:07:59.919112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.469 [2024-05-15 00:07:59.919123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.469 [2024-05-15 00:07:59.919131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.469 [2024-05-15 00:07:59.921697] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.469 [2024-05-15 00:07:59.930270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.469 [2024-05-15 00:07:59.930826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.931304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.931347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.469 [2024-05-15 00:07:59.931378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.469 [2024-05-15 00:07:59.931941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.469 [2024-05-15 00:07:59.932108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.469 [2024-05-15 00:07:59.932119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.469 [2024-05-15 00:07:59.932128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.469 [2024-05-15 00:07:59.934731] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.469 [2024-05-15 00:07:59.943026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.469 [2024-05-15 00:07:59.943658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.944100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.944140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.469 [2024-05-15 00:07:59.944152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.469 [2024-05-15 00:07:59.944326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.469 [2024-05-15 00:07:59.944494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.469 [2024-05-15 00:07:59.944504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.469 [2024-05-15 00:07:59.944513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.469 [2024-05-15 00:07:59.947138] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.469 [2024-05-15 00:07:59.955797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.469 [2024-05-15 00:07:59.956415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.956828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.956868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.469 [2024-05-15 00:07:59.956900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.469 [2024-05-15 00:07:59.957143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.469 [2024-05-15 00:07:59.957328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.469 [2024-05-15 00:07:59.957339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.469 [2024-05-15 00:07:59.957348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.469 [2024-05-15 00:07:59.959960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.469 [2024-05-15 00:07:59.968563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.469 [2024-05-15 00:07:59.969128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.969626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.469 [2024-05-15 00:07:59.969668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.469 [2024-05-15 00:07:59.969699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.469 [2024-05-15 00:07:59.970288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.470 [2024-05-15 00:07:59.970456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.470 [2024-05-15 00:07:59.970466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.470 [2024-05-15 00:07:59.970475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.470 [2024-05-15 00:07:59.973099] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.470 [2024-05-15 00:07:59.981397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.470 [2024-05-15 00:07:59.981960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:07:59.982382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:07:59.982396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.470 [2024-05-15 00:07:59.982406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.470 [2024-05-15 00:07:59.982580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.470 [2024-05-15 00:07:59.982752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.470 [2024-05-15 00:07:59.982763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.470 [2024-05-15 00:07:59.982772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.470 [2024-05-15 00:07:59.985511] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.470 [2024-05-15 00:07:59.994310] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.470 [2024-05-15 00:07:59.994870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:07:59.995357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:07:59.995370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.470 [2024-05-15 00:07:59.995380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.470 [2024-05-15 00:07:59.995551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.470 [2024-05-15 00:07:59.995722] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.470 [2024-05-15 00:07:59.995733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.470 [2024-05-15 00:07:59.995742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.470 [2024-05-15 00:07:59.998394] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.470 [2024-05-15 00:08:00.007375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.470 [2024-05-15 00:08:00.007899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:08:00.008255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:08:00.008268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.470 [2024-05-15 00:08:00.008278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.470 [2024-05-15 00:08:00.008449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.470 [2024-05-15 00:08:00.008621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.470 [2024-05-15 00:08:00.008631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.470 [2024-05-15 00:08:00.008640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.470 [2024-05-15 00:08:00.011346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.470 [2024-05-15 00:08:00.020429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.470 [2024-05-15 00:08:00.020976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:08:00.021399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:08:00.021413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.470 [2024-05-15 00:08:00.021422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.470 [2024-05-15 00:08:00.021594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.470 [2024-05-15 00:08:00.021769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.470 [2024-05-15 00:08:00.021780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.470 [2024-05-15 00:08:00.021789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.470 [2024-05-15 00:08:00.024493] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.470 [2024-05-15 00:08:00.033425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.470 [2024-05-15 00:08:00.034079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:08:00.034433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:08:00.034447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.470 [2024-05-15 00:08:00.034456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.470 [2024-05-15 00:08:00.034628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.470 [2024-05-15 00:08:00.034799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.470 [2024-05-15 00:08:00.034810] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.470 [2024-05-15 00:08:00.034819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.470 [2024-05-15 00:08:00.037517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.470 [2024-05-15 00:08:00.046449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.470 [2024-05-15 00:08:00.047005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:08:00.047405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.470 [2024-05-15 00:08:00.047418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.470 [2024-05-15 00:08:00.047428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.470 [2024-05-15 00:08:00.047600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.470 [2024-05-15 00:08:00.047773] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.470 [2024-05-15 00:08:00.047784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.470 [2024-05-15 00:08:00.047793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.470 [2024-05-15 00:08:00.050503] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.733 [2024-05-15 00:08:00.059419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.733 [2024-05-15 00:08:00.059993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.733 [2024-05-15 00:08:00.060395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.733 [2024-05-15 00:08:00.060410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.733 [2024-05-15 00:08:00.060420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.733 [2024-05-15 00:08:00.060602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.733 [2024-05-15 00:08:00.060785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.733 [2024-05-15 00:08:00.060805] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.733 [2024-05-15 00:08:00.060814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.733 [2024-05-15 00:08:00.063704] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.733 [2024-05-15 00:08:00.072461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.733 [2024-05-15 00:08:00.073011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.733 [2024-05-15 00:08:00.073417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.733 [2024-05-15 00:08:00.073430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.733 [2024-05-15 00:08:00.073440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.733 [2024-05-15 00:08:00.073611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.733 [2024-05-15 00:08:00.073783] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.733 [2024-05-15 00:08:00.073793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.733 [2024-05-15 00:08:00.073802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.733 [2024-05-15 00:08:00.076506] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.733 [2024-05-15 00:08:00.085425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.733 [2024-05-15 00:08:00.085953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.733 [2024-05-15 00:08:00.086298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.733 [2024-05-15 00:08:00.086311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.733 [2024-05-15 00:08:00.086321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.733 [2024-05-15 00:08:00.086493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.733 [2024-05-15 00:08:00.086664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.733 [2024-05-15 00:08:00.086674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.733 [2024-05-15 00:08:00.086683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.733 [2024-05-15 00:08:00.089385] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.733 [2024-05-15 00:08:00.098474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.733 [2024-05-15 00:08:00.098960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.733 [2024-05-15 00:08:00.099407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.733 [2024-05-15 00:08:00.099420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.733 [2024-05-15 00:08:00.099429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.733 [2024-05-15 00:08:00.099601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.733 [2024-05-15 00:08:00.099777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.733 [2024-05-15 00:08:00.099787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.733 [2024-05-15 00:08:00.099800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.733 [2024-05-15 00:08:00.102511] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.733 [2024-05-15 00:08:00.111503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.733 [2024-05-15 00:08:00.112029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.733 [2024-05-15 00:08:00.112322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.733 [2024-05-15 00:08:00.112335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.733 [2024-05-15 00:08:00.112344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.733 [2024-05-15 00:08:00.112516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.733 [2024-05-15 00:08:00.112688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.733 [2024-05-15 00:08:00.112699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.733 [2024-05-15 00:08:00.112708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.733 [2024-05-15 00:08:00.115744] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.733 [2024-05-15 00:08:00.124422] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.734 [2024-05-15 00:08:00.125042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.125494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.125539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.734 [2024-05-15 00:08:00.125572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.734 [2024-05-15 00:08:00.125913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.734 [2024-05-15 00:08:00.126081] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.734 [2024-05-15 00:08:00.126091] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.734 [2024-05-15 00:08:00.126100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.734 [2024-05-15 00:08:00.128790] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.734 [2024-05-15 00:08:00.137479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.734 [2024-05-15 00:08:00.138126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.138425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.138438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.734 [2024-05-15 00:08:00.138447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.734 [2024-05-15 00:08:00.138631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.734 [2024-05-15 00:08:00.138803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.734 [2024-05-15 00:08:00.138813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.734 [2024-05-15 00:08:00.138823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.734 [2024-05-15 00:08:00.141579] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.734 [2024-05-15 00:08:00.150486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.734 [2024-05-15 00:08:00.151131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.151567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.151609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.734 [2024-05-15 00:08:00.151641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.734 [2024-05-15 00:08:00.152252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.734 [2024-05-15 00:08:00.152567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.734 [2024-05-15 00:08:00.152578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.734 [2024-05-15 00:08:00.152587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.734 [2024-05-15 00:08:00.155273] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.734 [2024-05-15 00:08:00.163369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.734 [2024-05-15 00:08:00.164008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.164473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.164515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.734 [2024-05-15 00:08:00.164547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.734 [2024-05-15 00:08:00.164740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.734 [2024-05-15 00:08:00.164911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.734 [2024-05-15 00:08:00.164922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.734 [2024-05-15 00:08:00.164931] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.734 [2024-05-15 00:08:00.167625] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.734 [2024-05-15 00:08:00.176348] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.734 [2024-05-15 00:08:00.176889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.177313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.177326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.734 [2024-05-15 00:08:00.177336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.734 [2024-05-15 00:08:00.177508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.734 [2024-05-15 00:08:00.177679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.734 [2024-05-15 00:08:00.177690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.734 [2024-05-15 00:08:00.177699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.734 [2024-05-15 00:08:00.180447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.734 [2024-05-15 00:08:00.189308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.734 [2024-05-15 00:08:00.189841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.190318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.190362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.734 [2024-05-15 00:08:00.190395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.734 [2024-05-15 00:08:00.190935] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.734 [2024-05-15 00:08:00.191094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.734 [2024-05-15 00:08:00.191104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.734 [2024-05-15 00:08:00.191128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.734 [2024-05-15 00:08:00.193808] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.734 [2024-05-15 00:08:00.202235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.734 [2024-05-15 00:08:00.202763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.203104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.203144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.734 [2024-05-15 00:08:00.203176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.734 [2024-05-15 00:08:00.203640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.734 [2024-05-15 00:08:00.203813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.734 [2024-05-15 00:08:00.203823] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.734 [2024-05-15 00:08:00.203832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.734 [2024-05-15 00:08:00.206554] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.734 [2024-05-15 00:08:00.215263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.734 [2024-05-15 00:08:00.215886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.216284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.216328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.734 [2024-05-15 00:08:00.216360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.734 [2024-05-15 00:08:00.216821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.734 [2024-05-15 00:08:00.216993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.734 [2024-05-15 00:08:00.217003] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.734 [2024-05-15 00:08:00.217012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.734 [2024-05-15 00:08:00.219683] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.734 [2024-05-15 00:08:00.228247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.734 [2024-05-15 00:08:00.228860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.229343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.229385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.734 [2024-05-15 00:08:00.229418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.734 [2024-05-15 00:08:00.229840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.734 [2024-05-15 00:08:00.230012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.734 [2024-05-15 00:08:00.230022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.734 [2024-05-15 00:08:00.230031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.734 [2024-05-15 00:08:00.232755] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.734 [2024-05-15 00:08:00.241250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.734 [2024-05-15 00:08:00.241864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.242288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.734 [2024-05-15 00:08:00.242302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.734 [2024-05-15 00:08:00.242312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.734 [2024-05-15 00:08:00.242487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.734 [2024-05-15 00:08:00.242659] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.734 [2024-05-15 00:08:00.242670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.735 [2024-05-15 00:08:00.242679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.735 [2024-05-15 00:08:00.245408] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.735 [2024-05-15 00:08:00.254248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.735 [2024-05-15 00:08:00.254857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.255275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.255316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.735 [2024-05-15 00:08:00.255348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.735 [2024-05-15 00:08:00.255722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.735 [2024-05-15 00:08:00.255894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.735 [2024-05-15 00:08:00.255904] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.735 [2024-05-15 00:08:00.255914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.735 [2024-05-15 00:08:00.258601] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.735 [2024-05-15 00:08:00.267205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.735 [2024-05-15 00:08:00.267784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.268271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.268321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.735 [2024-05-15 00:08:00.268354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.735 [2024-05-15 00:08:00.268774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.735 [2024-05-15 00:08:00.268947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.735 [2024-05-15 00:08:00.268957] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.735 [2024-05-15 00:08:00.268966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.735 [2024-05-15 00:08:00.271588] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.735 [2024-05-15 00:08:00.280115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.735 [2024-05-15 00:08:00.280761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.281018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.281058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.735 [2024-05-15 00:08:00.281090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.735 [2024-05-15 00:08:00.281700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.735 [2024-05-15 00:08:00.281897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.735 [2024-05-15 00:08:00.281908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.735 [2024-05-15 00:08:00.281917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.735 [2024-05-15 00:08:00.284630] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.735 [2024-05-15 00:08:00.292989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.735 [2024-05-15 00:08:00.293618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.294090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.294129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.735 [2024-05-15 00:08:00.294161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.735 [2024-05-15 00:08:00.294753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.735 [2024-05-15 00:08:00.294926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.735 [2024-05-15 00:08:00.294936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.735 [2024-05-15 00:08:00.294945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.735 [2024-05-15 00:08:00.297615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.735 [2024-05-15 00:08:00.305858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.735 [2024-05-15 00:08:00.306490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.306844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.306856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.735 [2024-05-15 00:08:00.306869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.735 [2024-05-15 00:08:00.307041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.735 [2024-05-15 00:08:00.307228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.735 [2024-05-15 00:08:00.307238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.735 [2024-05-15 00:08:00.307247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.735 [2024-05-15 00:08:00.309945] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.735 [2024-05-15 00:08:00.318828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.735 [2024-05-15 00:08:00.319438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.319843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.735 [2024-05-15 00:08:00.319883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.735 [2024-05-15 00:08:00.319915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.735 [2024-05-15 00:08:00.320308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.735 [2024-05-15 00:08:00.320480] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.735 [2024-05-15 00:08:00.320490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.735 [2024-05-15 00:08:00.320499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.735 [2024-05-15 00:08:00.323246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.995 [2024-05-15 00:08:00.331785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.995 [2024-05-15 00:08:00.332416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.995 [2024-05-15 00:08:00.332788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.995 [2024-05-15 00:08:00.332828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.995 [2024-05-15 00:08:00.332860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.995 [2024-05-15 00:08:00.333472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.995 [2024-05-15 00:08:00.334071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.995 [2024-05-15 00:08:00.334116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.995 [2024-05-15 00:08:00.334125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.995 [2024-05-15 00:08:00.336844] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.995 [2024-05-15 00:08:00.344710] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.995 [2024-05-15 00:08:00.345342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.995 [2024-05-15 00:08:00.345715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.995 [2024-05-15 00:08:00.345728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.995 [2024-05-15 00:08:00.345737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.995 [2024-05-15 00:08:00.345913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.995 [2024-05-15 00:08:00.346084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.995 [2024-05-15 00:08:00.346095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.995 [2024-05-15 00:08:00.346104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.995 [2024-05-15 00:08:00.348776] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.995 [2024-05-15 00:08:00.357566] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.995 [2024-05-15 00:08:00.358181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.995 [2024-05-15 00:08:00.358591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.995 [2024-05-15 00:08:00.358631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.995 [2024-05-15 00:08:00.358664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.995 [2024-05-15 00:08:00.359168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.995 [2024-05-15 00:08:00.359346] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.995 [2024-05-15 00:08:00.359357] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.995 [2024-05-15 00:08:00.359366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.995 [2024-05-15 00:08:00.361985] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.995 [2024-05-15 00:08:00.370397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.995 [2024-05-15 00:08:00.371011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.995 [2024-05-15 00:08:00.371434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.995 [2024-05-15 00:08:00.371447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.995 [2024-05-15 00:08:00.371457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.995 [2024-05-15 00:08:00.371624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.995 [2024-05-15 00:08:00.371791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.996 [2024-05-15 00:08:00.371801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.996 [2024-05-15 00:08:00.371810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.996 [2024-05-15 00:08:00.374489] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.996 [2024-05-15 00:08:00.383240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.996 [2024-05-15 00:08:00.383865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.384323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.384353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.996 [2024-05-15 00:08:00.384363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.996 [2024-05-15 00:08:00.384537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.996 [2024-05-15 00:08:00.384713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.996 [2024-05-15 00:08:00.384724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.996 [2024-05-15 00:08:00.384733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.996 [2024-05-15 00:08:00.387366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.996 [2024-05-15 00:08:00.396129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.996 [2024-05-15 00:08:00.396801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.397279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.397292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.996 [2024-05-15 00:08:00.397302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.996 [2024-05-15 00:08:00.397481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.996 [2024-05-15 00:08:00.397648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.996 [2024-05-15 00:08:00.397658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.996 [2024-05-15 00:08:00.397667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.996 [2024-05-15 00:08:00.400411] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.996 [2024-05-15 00:08:00.409056] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3724219 Killed "${NVMF_APP[@]}" "$@" 00:25:59.996 [2024-05-15 00:08:00.409676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.410027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.410039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.996 [2024-05-15 00:08:00.410049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.996 [2024-05-15 00:08:00.410226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:59.996 [2024-05-15 00:08:00.410401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.996 [2024-05-15 00:08:00.410412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.996 [2024-05-15 00:08:00.410421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:59.996 [2024-05-15 00:08:00.413123] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3725720 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3725720 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3725720 ']' 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:59.996 [2024-05-15 00:08:00.422029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.996 00:08:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:59.996 [2024-05-15 00:08:00.422656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.423069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.423082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.996 [2024-05-15 00:08:00.423091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.996 [2024-05-15 00:08:00.423267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.996 [2024-05-15 00:08:00.423439] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.996 [2024-05-15 00:08:00.423449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.996 [2024-05-15 00:08:00.423458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.996 [2024-05-15 00:08:00.426153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.996 [2024-05-15 00:08:00.435067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.996 [2024-05-15 00:08:00.435692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.436032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.436045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.996 [2024-05-15 00:08:00.436054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.996 [2024-05-15 00:08:00.436231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.996 [2024-05-15 00:08:00.436406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.996 [2024-05-15 00:08:00.436417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.996 [2024-05-15 00:08:00.436426] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.996 [2024-05-15 00:08:00.439124] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
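A note on the repeated failures above: errno 111 is ECONNREFUSED. The previous target process was killed by the test (the 'line 35: 3724219 Killed "${NVMF_APP[@]}" "$@"' message) and tgt_init/nvmfappstart is bringing up a fresh nvmf_tgt (nvmfpid=3725720), so until the new target is listening on 10.0.0.2 port 4420 again, every reconnect attempt from the host is refused and the controller reset retries keep failing. The minimal C sketch below is illustrative only, not SPDK code; the address and port are taken from the log, everything else is an assumption for demonstration. It shows that a plain connect() to a TCP port with no listener fails with errno 111 on Linux, which is the same error posix.c reports here.

/* Illustrative sketch only, not SPDK code: connect() to a TCP port with no
 * listener fails with ECONNREFUSED (errno 111 on Linux), the same error the
 * reconnect attempts in this log keep hitting while the target restarts.
 * Address and port (10.0.0.2:4420) are taken from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return 1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With nothing listening: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}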
00:25:59.996 [2024-05-15 00:08:00.448036] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.996 [2024-05-15 00:08:00.448645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.448988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.449000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.996 [2024-05-15 00:08:00.449010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.996 [2024-05-15 00:08:00.449181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.996 [2024-05-15 00:08:00.449365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.996 [2024-05-15 00:08:00.449376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.996 [2024-05-15 00:08:00.449386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.996 [2024-05-15 00:08:00.452077] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.996 [2024-05-15 00:08:00.460978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.996 [2024-05-15 00:08:00.461613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.462034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.462046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.996 [2024-05-15 00:08:00.462056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.996 [2024-05-15 00:08:00.462230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.996 [2024-05-15 00:08:00.462401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.996 [2024-05-15 00:08:00.462412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.996 [2024-05-15 00:08:00.462421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.996 [2024-05-15 00:08:00.465116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.996 [2024-05-15 00:08:00.467231] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:25:59.996 [2024-05-15 00:08:00.467274] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.996 [2024-05-15 00:08:00.474037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.996 [2024-05-15 00:08:00.474659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.475084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.996 [2024-05-15 00:08:00.475096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.996 [2024-05-15 00:08:00.475106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.996 [2024-05-15 00:08:00.475283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.996 [2024-05-15 00:08:00.475455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.997 [2024-05-15 00:08:00.475465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.997 [2024-05-15 00:08:00.475474] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.997 [2024-05-15 00:08:00.478170] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.997 [2024-05-15 00:08:00.486948] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.997 [2024-05-15 00:08:00.487575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.487787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.487799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.997 [2024-05-15 00:08:00.487809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.997 [2024-05-15 00:08:00.487985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.997 [2024-05-15 00:08:00.488156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.997 [2024-05-15 00:08:00.488167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.997 [2024-05-15 00:08:00.488176] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.997 [2024-05-15 00:08:00.490879] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.997 [2024-05-15 00:08:00.500005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.997 [2024-05-15 00:08:00.500640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.501063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.501076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.997 [2024-05-15 00:08:00.501086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.997 [2024-05-15 00:08:00.501263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.997 [2024-05-15 00:08:00.501436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.997 [2024-05-15 00:08:00.501446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.997 [2024-05-15 00:08:00.501455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.997 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.997 [2024-05-15 00:08:00.504156] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.997 [2024-05-15 00:08:00.512930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.997 [2024-05-15 00:08:00.513550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.513972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.513984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.997 [2024-05-15 00:08:00.513994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.997 [2024-05-15 00:08:00.514166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.997 [2024-05-15 00:08:00.514342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.997 [2024-05-15 00:08:00.514353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.997 [2024-05-15 00:08:00.514362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.997 [2024-05-15 00:08:00.517064] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.997 [2024-05-15 00:08:00.525833] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.997 [2024-05-15 00:08:00.526455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.526880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.526892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.997 [2024-05-15 00:08:00.526902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.997 [2024-05-15 00:08:00.527075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.997 [2024-05-15 00:08:00.527254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.997 [2024-05-15 00:08:00.527265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.997 [2024-05-15 00:08:00.527274] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.997 [2024-05-15 00:08:00.529972] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.997 [2024-05-15 00:08:00.538723] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.997 [2024-05-15 00:08:00.539366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.539707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.539719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.997 [2024-05-15 00:08:00.539729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.997 [2024-05-15 00:08:00.539900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.997 [2024-05-15 00:08:00.540072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.997 [2024-05-15 00:08:00.540082] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.997 [2024-05-15 00:08:00.540091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.997 [2024-05-15 00:08:00.542790] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.997 [2024-05-15 00:08:00.543507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:59.997 [2024-05-15 00:08:00.551751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.997 [2024-05-15 00:08:00.552380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.552808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.552820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.997 [2024-05-15 00:08:00.552830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.997 [2024-05-15 00:08:00.553003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.997 [2024-05-15 00:08:00.553177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.997 [2024-05-15 00:08:00.553188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.997 [2024-05-15 00:08:00.553202] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.997 [2024-05-15 00:08:00.555901] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.997 [2024-05-15 00:08:00.564796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.997 [2024-05-15 00:08:00.565392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.565790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.565803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.997 [2024-05-15 00:08:00.565812] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.997 [2024-05-15 00:08:00.565984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.997 [2024-05-15 00:08:00.566160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.997 [2024-05-15 00:08:00.566171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.997 [2024-05-15 00:08:00.566180] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.997 [2024-05-15 00:08:00.568919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.997 [2024-05-15 00:08:00.577828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.997 [2024-05-15 00:08:00.578449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.578878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.997 [2024-05-15 00:08:00.578890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:25:59.997 [2024-05-15 00:08:00.578900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:25:59.997 [2024-05-15 00:08:00.579072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:25:59.997 [2024-05-15 00:08:00.579248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.997 [2024-05-15 00:08:00.579259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.997 [2024-05-15 00:08:00.579269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.997 [2024-05-15 00:08:00.581969] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.258 [2024-05-15 00:08:00.590749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.258 [2024-05-15 00:08:00.591391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.591813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.591826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.258 [2024-05-15 00:08:00.591836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.258 [2024-05-15 00:08:00.592012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.258 [2024-05-15 00:08:00.592184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.258 [2024-05-15 00:08:00.592199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.258 [2024-05-15 00:08:00.592209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.258 [2024-05-15 00:08:00.594904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.258 [2024-05-15 00:08:00.603676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.258 [2024-05-15 00:08:00.604293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.604717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.604729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.258 [2024-05-15 00:08:00.604739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.258 [2024-05-15 00:08:00.604911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.258 [2024-05-15 00:08:00.605082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.258 [2024-05-15 00:08:00.605097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.258 [2024-05-15 00:08:00.605106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.258 [2024-05-15 00:08:00.607805] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.258 [2024-05-15 00:08:00.616737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.258 [2024-05-15 00:08:00.617353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.617756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.617769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.258 [2024-05-15 00:08:00.617778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.258 [2024-05-15 00:08:00.617950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.258 [2024-05-15 00:08:00.618120] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.258 [2024-05-15 00:08:00.618131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.258 [2024-05-15 00:08:00.618140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.258 [2024-05-15 00:08:00.619726] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.258 [2024-05-15 00:08:00.619752] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.258 [2024-05-15 00:08:00.619761] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.258 [2024-05-15 00:08:00.619770] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.258 [2024-05-15 00:08:00.619792] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
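For reference, the app_setup_trace notices above describe how to capture the tracepoints enabled by the -e 0xFFFF mask: while this nvmf_tgt instance is running, the printed command 'spdk_trace -s nvmf -i 0' takes a snapshot of events at runtime (the -s and -i arguments presumably select the application's trace shared memory and instance id 0), and copying /dev/shm/nvmf_trace.0 preserves the same trace buffer for offline analysis after the run, as the log itself suggests.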
00:26:00.258 [2024-05-15 00:08:00.619833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.258 [2024-05-15 00:08:00.619937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.258 [2024-05-15 00:08:00.619939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.258 [2024-05-15 00:08:00.620854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.258 [2024-05-15 00:08:00.629773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.258 [2024-05-15 00:08:00.630432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.630858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.630871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.258 [2024-05-15 00:08:00.630880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.258 [2024-05-15 00:08:00.631053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.258 [2024-05-15 00:08:00.631232] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.258 [2024-05-15 00:08:00.631243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.258 [2024-05-15 00:08:00.631252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.258 [2024-05-15 00:08:00.633952] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.258 [2024-05-15 00:08:00.642717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.258 [2024-05-15 00:08:00.643375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.643783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.643795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.258 [2024-05-15 00:08:00.643805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.258 [2024-05-15 00:08:00.643978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.258 [2024-05-15 00:08:00.644150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.258 [2024-05-15 00:08:00.644160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.258 [2024-05-15 00:08:00.644170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.258 [2024-05-15 00:08:00.646868] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
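The three reactor notices are consistent with the core mask the target was started with: nvmf_tgt was launched with -m 0xE (visible in the command line above and passed through to DPDK as '-c 0xE' in the EAL parameters). As a quick check, 0xE is binary 1110, so bits 1, 2 and 3 are set while bit 0 is clear; assuming the usual bitmask interpretation of -m, that selects cores 1, 2 and 3 and skips core 0, matching 'Total cores available: 3' and the reactors started on cores 1, 2 and 3.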
00:26:00.258 [2024-05-15 00:08:00.655640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.258 [2024-05-15 00:08:00.656278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.258 [2024-05-15 00:08:00.656679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.656691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.656701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.656874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.657047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.657058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.657067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.659771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.259 [2024-05-15 00:08:00.668689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.669328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.669682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.669695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.669705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.669877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.670048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.670058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.670068] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.672767] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.259 [2024-05-15 00:08:00.681681] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.682108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.682530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.682547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.682557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.682729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.682901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.682911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.682921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.685624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.259 [2024-05-15 00:08:00.694699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.695241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.695602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.695615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.695624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.695796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.695968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.695979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.695987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.698690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.259 [2024-05-15 00:08:00.707610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.708210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.708557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.708569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.708579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.708750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.708922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.708932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.708941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.711823] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.259 [2024-05-15 00:08:00.720598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.721225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.721573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.721586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.721599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.721772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.721944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.721954] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.721963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.724662] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.259 [2024-05-15 00:08:00.733566] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.734194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.734612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.734625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.734635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.734806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.734977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.734987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.734996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.737694] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.259 [2024-05-15 00:08:00.746470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.747099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.747470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.747483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.747492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.747665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.747837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.747848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.747857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.750567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.259 [2024-05-15 00:08:00.759482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.760124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.760544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.760558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.760568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.760744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.760915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.760926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.760935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.763632] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.259 [2024-05-15 00:08:00.772385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.773005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.773426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.773439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.773449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.773621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.773793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.773804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.773813] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.776509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.259 [2024-05-15 00:08:00.785419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.786049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.786475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.786489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.786498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.786670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.786842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.786853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.786862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.259 [2024-05-15 00:08:00.789561] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.259 [2024-05-15 00:08:00.798471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.259 [2024-05-15 00:08:00.799093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.799491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.259 [2024-05-15 00:08:00.799503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.259 [2024-05-15 00:08:00.799513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.259 [2024-05-15 00:08:00.799685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.259 [2024-05-15 00:08:00.799860] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.259 [2024-05-15 00:08:00.799870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.259 [2024-05-15 00:08:00.799879] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.260 [2024-05-15 00:08:00.802578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.260 [2024-05-15 00:08:00.811494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.260 [2024-05-15 00:08:00.812104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.260 [2024-05-15 00:08:00.812527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.260 [2024-05-15 00:08:00.812542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.260 [2024-05-15 00:08:00.812551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.260 [2024-05-15 00:08:00.812725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.260 [2024-05-15 00:08:00.812896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.260 [2024-05-15 00:08:00.812906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.260 [2024-05-15 00:08:00.812915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.260 [2024-05-15 00:08:00.815615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.260 [2024-05-15 00:08:00.824533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.260 [2024-05-15 00:08:00.825154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.260 [2024-05-15 00:08:00.825575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.260 [2024-05-15 00:08:00.825589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.260 [2024-05-15 00:08:00.825598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.260 [2024-05-15 00:08:00.825769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.260 [2024-05-15 00:08:00.825941] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.260 [2024-05-15 00:08:00.825951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.260 [2024-05-15 00:08:00.825960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.260 [2024-05-15 00:08:00.828656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.260 [2024-05-15 00:08:00.837571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.260 [2024-05-15 00:08:00.838196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.260 [2024-05-15 00:08:00.838388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.260 [2024-05-15 00:08:00.838401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.260 [2024-05-15 00:08:00.838410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.260 [2024-05-15 00:08:00.838581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.260 [2024-05-15 00:08:00.838753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.260 [2024-05-15 00:08:00.838764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.260 [2024-05-15 00:08:00.838776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.260 [2024-05-15 00:08:00.841479] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.520 [2024-05-15 00:08:00.850560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.520 [2024-05-15 00:08:00.851179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.520 [2024-05-15 00:08:00.851376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.520 [2024-05-15 00:08:00.851389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.520 [2024-05-15 00:08:00.851399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.520 [2024-05-15 00:08:00.851570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.520 [2024-05-15 00:08:00.851741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.520 [2024-05-15 00:08:00.851752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.520 [2024-05-15 00:08:00.851761] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.520 [2024-05-15 00:08:00.854466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.520 [2024-05-15 00:08:00.863538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.520 [2024-05-15 00:08:00.864158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.520 [2024-05-15 00:08:00.864575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.520 [2024-05-15 00:08:00.864588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.520 [2024-05-15 00:08:00.864598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.520 [2024-05-15 00:08:00.864769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.520 [2024-05-15 00:08:00.864942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.520 [2024-05-15 00:08:00.864952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.520 [2024-05-15 00:08:00.864961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.520 [2024-05-15 00:08:00.867661] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.520 [2024-05-15 00:08:00.876577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.520 [2024-05-15 00:08:00.877047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.520 [2024-05-15 00:08:00.877418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.520 [2024-05-15 00:08:00.877432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.520 [2024-05-15 00:08:00.877441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.520 [2024-05-15 00:08:00.877614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.520 [2024-05-15 00:08:00.877786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.520 [2024-05-15 00:08:00.877797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.520 [2024-05-15 00:08:00.877806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.520 [2024-05-15 00:08:00.880508] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.520 [2024-05-15 00:08:00.889580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.520 [2024-05-15 00:08:00.890131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.520 [2024-05-15 00:08:00.890550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.520 [2024-05-15 00:08:00.890563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.520 [2024-05-15 00:08:00.890572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.520 [2024-05-15 00:08:00.890744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.520 [2024-05-15 00:08:00.890916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.520 [2024-05-15 00:08:00.890927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.520 [2024-05-15 00:08:00.890936] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.521 [2024-05-15 00:08:00.893652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.521 [2024-05-15 00:08:00.902570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.521 [2024-05-15 00:08:00.903123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.903476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.903489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.521 [2024-05-15 00:08:00.903499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.521 [2024-05-15 00:08:00.903671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.521 [2024-05-15 00:08:00.903844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.521 [2024-05-15 00:08:00.903854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.521 [2024-05-15 00:08:00.903863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.521 [2024-05-15 00:08:00.906559] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.521 [2024-05-15 00:08:00.915483] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.521 [2024-05-15 00:08:00.916111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.916508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.916521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.521 [2024-05-15 00:08:00.916530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.521 [2024-05-15 00:08:00.916703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.521 [2024-05-15 00:08:00.916875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.521 [2024-05-15 00:08:00.916885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.521 [2024-05-15 00:08:00.916894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.521 [2024-05-15 00:08:00.919593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.521 [2024-05-15 00:08:00.928513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.521 [2024-05-15 00:08:00.929114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.929488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.929501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.521 [2024-05-15 00:08:00.929512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.521 [2024-05-15 00:08:00.929684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.521 [2024-05-15 00:08:00.929855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.521 [2024-05-15 00:08:00.929865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.521 [2024-05-15 00:08:00.929875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.521 [2024-05-15 00:08:00.932573] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.521 [2024-05-15 00:08:00.941491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.521 [2024-05-15 00:08:00.942027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.942451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.942465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.521 [2024-05-15 00:08:00.942474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.521 [2024-05-15 00:08:00.942646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.521 [2024-05-15 00:08:00.942817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.521 [2024-05-15 00:08:00.942828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.521 [2024-05-15 00:08:00.942837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.521 [2024-05-15 00:08:00.945535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.521 [2024-05-15 00:08:00.954453] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.521 [2024-05-15 00:08:00.955073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.955471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.955485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.521 [2024-05-15 00:08:00.955494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.521 [2024-05-15 00:08:00.955666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.521 [2024-05-15 00:08:00.955839] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.521 [2024-05-15 00:08:00.955849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.521 [2024-05-15 00:08:00.955858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.521 [2024-05-15 00:08:00.958555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.521 [2024-05-15 00:08:00.967485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.521 [2024-05-15 00:08:00.968089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.968509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.968522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.521 [2024-05-15 00:08:00.968532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.521 [2024-05-15 00:08:00.968703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.521 [2024-05-15 00:08:00.968874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.521 [2024-05-15 00:08:00.968884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.521 [2024-05-15 00:08:00.968893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.521 [2024-05-15 00:08:00.971593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.521 [2024-05-15 00:08:00.980512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.521 [2024-05-15 00:08:00.981111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.981354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.981369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.521 [2024-05-15 00:08:00.981378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.521 [2024-05-15 00:08:00.981552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.521 [2024-05-15 00:08:00.981723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.521 [2024-05-15 00:08:00.981734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.521 [2024-05-15 00:08:00.981743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.521 [2024-05-15 00:08:00.984446] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.521 [2024-05-15 00:08:00.993513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.521 [2024-05-15 00:08:00.994130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.994532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:00.994545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.521 [2024-05-15 00:08:00.994555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.521 [2024-05-15 00:08:00.994726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.521 [2024-05-15 00:08:00.994898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.521 [2024-05-15 00:08:00.994908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.521 [2024-05-15 00:08:00.994917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.521 [2024-05-15 00:08:00.997616] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.521 [2024-05-15 00:08:01.006525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.521 [2024-05-15 00:08:01.007147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:01.007546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.521 [2024-05-15 00:08:01.007565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.521 [2024-05-15 00:08:01.007575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.521 [2024-05-15 00:08:01.007747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.521 [2024-05-15 00:08:01.007918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.521 [2024-05-15 00:08:01.007929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.521 [2024-05-15 00:08:01.007938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.521 [2024-05-15 00:08:01.010636] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.522 [2024-05-15 00:08:01.019550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.522 [2024-05-15 00:08:01.019957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.020367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.020380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.522 [2024-05-15 00:08:01.020389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.522 [2024-05-15 00:08:01.020561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.522 [2024-05-15 00:08:01.020732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.522 [2024-05-15 00:08:01.020742] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.522 [2024-05-15 00:08:01.020751] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.522 [2024-05-15 00:08:01.023452] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.522 [2024-05-15 00:08:01.032517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.522 [2024-05-15 00:08:01.033117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.033539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.033552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.522 [2024-05-15 00:08:01.033562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.522 [2024-05-15 00:08:01.033734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.522 [2024-05-15 00:08:01.033906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.522 [2024-05-15 00:08:01.033916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.522 [2024-05-15 00:08:01.033925] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.522 [2024-05-15 00:08:01.036622] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.522 [2024-05-15 00:08:01.045530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.522 [2024-05-15 00:08:01.046151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.046574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.046588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.522 [2024-05-15 00:08:01.046600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.522 [2024-05-15 00:08:01.046772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.522 [2024-05-15 00:08:01.046947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.522 [2024-05-15 00:08:01.046957] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.522 [2024-05-15 00:08:01.046966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.522 [2024-05-15 00:08:01.049673] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.522 [2024-05-15 00:08:01.058438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.522 [2024-05-15 00:08:01.059063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.059485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.059499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.522 [2024-05-15 00:08:01.059508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.522 [2024-05-15 00:08:01.059681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.522 [2024-05-15 00:08:01.059853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.522 [2024-05-15 00:08:01.059864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.522 [2024-05-15 00:08:01.059873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.522 [2024-05-15 00:08:01.062571] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.522 [2024-05-15 00:08:01.071481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.522 [2024-05-15 00:08:01.072104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.072505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.072518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.522 [2024-05-15 00:08:01.072527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.522 [2024-05-15 00:08:01.072699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.522 [2024-05-15 00:08:01.072871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.522 [2024-05-15 00:08:01.072882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.522 [2024-05-15 00:08:01.072891] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.522 [2024-05-15 00:08:01.075589] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.522 [2024-05-15 00:08:01.084497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.522 [2024-05-15 00:08:01.085119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.085546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.085559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.522 [2024-05-15 00:08:01.085568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.522 [2024-05-15 00:08:01.085744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.522 [2024-05-15 00:08:01.085918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.522 [2024-05-15 00:08:01.085928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.522 [2024-05-15 00:08:01.085937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.522 [2024-05-15 00:08:01.088639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.522 [2024-05-15 00:08:01.097544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.522 [2024-05-15 00:08:01.098151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.098547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.522 [2024-05-15 00:08:01.098561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.522 [2024-05-15 00:08:01.098570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.522 [2024-05-15 00:08:01.098742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.522 [2024-05-15 00:08:01.098913] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.522 [2024-05-15 00:08:01.098925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.522 [2024-05-15 00:08:01.098934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.522 [2024-05-15 00:08:01.101632] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.522 [2024-05-15 00:08:01.110545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.783 [2024-05-15 00:08:01.111166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.783 [2024-05-15 00:08:01.111546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.783 [2024-05-15 00:08:01.111560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.783 [2024-05-15 00:08:01.111569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.783 [2024-05-15 00:08:01.111740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.783 [2024-05-15 00:08:01.111912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.783 [2024-05-15 00:08:01.111923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.783 [2024-05-15 00:08:01.111932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.783 [2024-05-15 00:08:01.114631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.783 [2024-05-15 00:08:01.123551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.783 [2024-05-15 00:08:01.124175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.783 [2024-05-15 00:08:01.124554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.783 [2024-05-15 00:08:01.124568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.783 [2024-05-15 00:08:01.124578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.783 [2024-05-15 00:08:01.124750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.783 [2024-05-15 00:08:01.124926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.783 [2024-05-15 00:08:01.124937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.783 [2024-05-15 00:08:01.124945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.783 [2024-05-15 00:08:01.127643] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.783 [2024-05-15 00:08:01.136552] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.783 [2024-05-15 00:08:01.137035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.783 [2024-05-15 00:08:01.137415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.783 [2024-05-15 00:08:01.137429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.783 [2024-05-15 00:08:01.137439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.783 [2024-05-15 00:08:01.137612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.783 [2024-05-15 00:08:01.137787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.783 [2024-05-15 00:08:01.137797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.783 [2024-05-15 00:08:01.137806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.783 [2024-05-15 00:08:01.140501] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.783 [2024-05-15 00:08:01.149576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.783 [2024-05-15 00:08:01.150204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.783 [2024-05-15 00:08:01.150581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.783 [2024-05-15 00:08:01.150594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.783 [2024-05-15 00:08:01.150604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.783 [2024-05-15 00:08:01.150776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.783 [2024-05-15 00:08:01.150952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.783 [2024-05-15 00:08:01.150963] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.783 [2024-05-15 00:08:01.150972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.783 [2024-05-15 00:08:01.153671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.783 [2024-05-15 00:08:01.162591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.783 [2024-05-15 00:08:01.163142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.783 [2024-05-15 00:08:01.163279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.783 [2024-05-15 00:08:01.163292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.783 [2024-05-15 00:08:01.163301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.783 [2024-05-15 00:08:01.163473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.783 [2024-05-15 00:08:01.163645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.783 [2024-05-15 00:08:01.163659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.784 [2024-05-15 00:08:01.163668] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.784 [2024-05-15 00:08:01.166368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.784 [2024-05-15 00:08:01.175600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.784 [2024-05-15 00:08:01.176075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.176501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.176514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.784 [2024-05-15 00:08:01.176524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.784 [2024-05-15 00:08:01.176696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.784 [2024-05-15 00:08:01.176867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.784 [2024-05-15 00:08:01.176878] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.784 [2024-05-15 00:08:01.176887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.784 [2024-05-15 00:08:01.179602] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.784 [2024-05-15 00:08:01.188519] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.784 [2024-05-15 00:08:01.189079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.189481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.189495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.784 [2024-05-15 00:08:01.189504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.784 [2024-05-15 00:08:01.189676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.784 [2024-05-15 00:08:01.189848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.784 [2024-05-15 00:08:01.189859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.784 [2024-05-15 00:08:01.189868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.784 [2024-05-15 00:08:01.192567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.784 [2024-05-15 00:08:01.201480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.784 [2024-05-15 00:08:01.202005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.202300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.202314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.784 [2024-05-15 00:08:01.202324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.784 [2024-05-15 00:08:01.202498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.784 [2024-05-15 00:08:01.202670] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.784 [2024-05-15 00:08:01.202683] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.784 [2024-05-15 00:08:01.202696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.784 [2024-05-15 00:08:01.205401] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.784 [2024-05-15 00:08:01.214480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.784 [2024-05-15 00:08:01.214942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.215254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.215268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.784 [2024-05-15 00:08:01.215277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.784 [2024-05-15 00:08:01.215448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.784 [2024-05-15 00:08:01.215619] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.784 [2024-05-15 00:08:01.215629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.784 [2024-05-15 00:08:01.215639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.784 [2024-05-15 00:08:01.218337] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.784 [2024-05-15 00:08:01.227418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.784 [2024-05-15 00:08:01.227943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.228300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.228313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.784 [2024-05-15 00:08:01.228322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.784 [2024-05-15 00:08:01.228494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.784 [2024-05-15 00:08:01.228667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.784 [2024-05-15 00:08:01.228677] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.784 [2024-05-15 00:08:01.228687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.784 [2024-05-15 00:08:01.231389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.784 [2024-05-15 00:08:01.240463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.784 [2024-05-15 00:08:01.241061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.241416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.241429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.784 [2024-05-15 00:08:01.241439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.784 [2024-05-15 00:08:01.241611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.784 [2024-05-15 00:08:01.241782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.784 [2024-05-15 00:08:01.241793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.784 [2024-05-15 00:08:01.241802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.784 [2024-05-15 00:08:01.244507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.784 [2024-05-15 00:08:01.253430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.784 [2024-05-15 00:08:01.253959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.254309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.254322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.784 [2024-05-15 00:08:01.254331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.784 [2024-05-15 00:08:01.254504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.784 [2024-05-15 00:08:01.254675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.784 [2024-05-15 00:08:01.254685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.784 [2024-05-15 00:08:01.254694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.784 [2024-05-15 00:08:01.257396] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.784 [2024-05-15 00:08:01.266471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.784 [2024-05-15 00:08:01.267066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.267469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.267482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.784 [2024-05-15 00:08:01.267491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.784 [2024-05-15 00:08:01.267663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.784 [2024-05-15 00:08:01.267835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.784 [2024-05-15 00:08:01.267845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.784 [2024-05-15 00:08:01.267855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.784 [2024-05-15 00:08:01.270556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.784 [2024-05-15 00:08:01.279478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.784 [2024-05-15 00:08:01.280093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.280467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.784 [2024-05-15 00:08:01.280480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.784 [2024-05-15 00:08:01.280490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.784 [2024-05-15 00:08:01.280661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.784 [2024-05-15 00:08:01.280833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.784 [2024-05-15 00:08:01.280844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.784 [2024-05-15 00:08:01.280852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.784 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:00.784 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:26:00.784 00:08:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.785 [2024-05-15 00:08:01.283554] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.785 [2024-05-15 00:08:01.292474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.785 [2024-05-15 00:08:01.293098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.293503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.293516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.785 [2024-05-15 00:08:01.293526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.785 [2024-05-15 00:08:01.293699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.785 [2024-05-15 00:08:01.293871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.785 [2024-05-15 00:08:01.293881] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.785 [2024-05-15 00:08:01.293890] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.785 [2024-05-15 00:08:01.296591] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.785 [2024-05-15 00:08:01.305510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.785 [2024-05-15 00:08:01.305982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.306346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.306359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.785 [2024-05-15 00:08:01.306368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.785 [2024-05-15 00:08:01.306540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.785 [2024-05-15 00:08:01.306713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.785 [2024-05-15 00:08:01.306723] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.785 [2024-05-15 00:08:01.306732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.785 [2024-05-15 00:08:01.309435] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.785 [2024-05-15 00:08:01.318515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.785 [2024-05-15 00:08:01.319075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.319426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.319440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.785 [2024-05-15 00:08:01.319449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.785 [2024-05-15 00:08:01.319621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.785 [2024-05-15 00:08:01.319792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.785 [2024-05-15 00:08:01.319803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.785 [2024-05-15 00:08:01.319815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.785 [2024-05-15 00:08:01.322520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.785 [2024-05-15 00:08:01.331161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.785 [2024-05-15 00:08:01.331436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.785 [2024-05-15 00:08:01.331917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.332342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.332355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.785 [2024-05-15 00:08:01.332365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.785 [2024-05-15 00:08:01.332537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.785 [2024-05-15 00:08:01.332708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.785 [2024-05-15 00:08:01.332719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.785 [2024-05-15 00:08:01.332727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.785 [2024-05-15 00:08:01.335426] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.785 [2024-05-15 00:08:01.344342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.785 [2024-05-15 00:08:01.344892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.345126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.345138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.785 [2024-05-15 00:08:01.345147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.785 [2024-05-15 00:08:01.345324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.785 [2024-05-15 00:08:01.345497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.785 [2024-05-15 00:08:01.345507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.785 [2024-05-15 00:08:01.345516] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.785 [2024-05-15 00:08:01.348217] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.785 [2024-05-15 00:08:01.357296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.785 [2024-05-15 00:08:01.357844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.358268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.358282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.785 [2024-05-15 00:08:01.358295] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.785 [2024-05-15 00:08:01.358467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.785 [2024-05-15 00:08:01.358639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.785 [2024-05-15 00:08:01.358649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.785 [2024-05-15 00:08:01.358658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.785 [2024-05-15 00:08:01.361357] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.785 Malloc0 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.785 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.785 [2024-05-15 00:08:01.370270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.785 [2024-05-15 00:08:01.370844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.371185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.785 [2024-05-15 00:08:01.371202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:00.785 [2024-05-15 00:08:01.371212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:00.785 [2024-05-15 00:08:01.371384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:00.785 [2024-05-15 00:08:01.371555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.785 [2024-05-15 00:08:01.371565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.785 [2024-05-15 00:08:01.371574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.045 [2024-05-15 00:08:01.374272] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.045 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.045 00:08:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.045 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.045 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:01.045 [2024-05-15 00:08:01.383181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.045 [2024-05-15 00:08:01.383664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.045 [2024-05-15 00:08:01.384015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.045 [2024-05-15 00:08:01.384027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81b9f0 with addr=10.0.0.2, port=4420 00:26:01.045 [2024-05-15 00:08:01.384037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81b9f0 is same with the state(5) to be set 00:26:01.045 [2024-05-15 00:08:01.384215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b9f0 (9): Bad file descriptor 00:26:01.045 [2024-05-15 00:08:01.384388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.045 [2024-05-15 00:08:01.384398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.045 [2024-05-15 00:08:01.384410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.045 [2024-05-15 00:08:01.387103] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.045 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.045 00:08:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.045 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.045 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:01.045 [2024-05-15 00:08:01.391739] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:01.045 [2024-05-15 00:08:01.391982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.045 [2024-05-15 00:08:01.396168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.045 00:08:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.045 00:08:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3724653 00:26:01.045 [2024-05-15 00:08:01.512994] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:11.019 00:26:11.019 Latency(us) 00:26:11.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.019 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:11.019 Verification LBA range: start 0x0 length 0x4000 00:26:11.019 Nvme1n1 : 15.01 8656.43 33.81 12636.46 0.00 5991.39 1061.68 22229.81 00:26:11.019 =================================================================================================================== 00:26:11.019 Total : 8656.43 33.81 12636.46 0.00 5991.39 1061.68 22229.81 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:11.019 00:08:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:11.019 rmmod nvme_tcp 00:26:11.019 rmmod nvme_fabrics 00:26:11.019 rmmod nvme_keyring 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3725720 ']' 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3725720 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3725720 ']' 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3725720 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3725720 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3725720' 00:26:11.019 killing process with pid 3725720 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3725720 00:26:11.019 [2024-05-15 00:08:10.125155] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@970 -- # wait 3725720 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.019 00:08:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.957 00:08:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:11.957 00:26:11.957 real 0m27.661s 00:26:11.957 user 1m2.193s 00:26:11.957 sys 0m8.346s 00:26:11.957 00:08:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:11.957 00:08:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:11.957 ************************************ 00:26:11.957 END TEST nvmf_bdevperf 00:26:11.957 ************************************ 00:26:11.957 00:08:12 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:11.957 00:08:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:11.957 00:08:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:11.957 00:08:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:11.957 ************************************ 00:26:11.957 START TEST nvmf_target_disconnect 00:26:11.957 ************************************ 00:26:11.957 00:08:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:12.218 * Looking for test storage... 
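nvmf_bdevperf finishes here (about 27.7 s wall time) and the wrapper immediately starts the next suite through run_test. The recorded invocation is just the host-side script with the transport argument, so launching it by hand would look roughly like the following, assuming the same workspace layout as this job (the autotest scripts normally run with root privileges):
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/host/target_disconnect.sh --transport=tcp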
00:26:12.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.218 00:08:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:12.219 00:08:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.825 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
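The NIC discovery traced here is purely a PCI id match: gather_supported_nvmf_pci_devs fills the e810 array from Intel device ids 0x1592/0x159b, x722 from 0x37d2, and mlx from the Mellanox ids listed above, and only devices from those arrays can become test interfaces. A minimal sketch of the same check done by hand for the E810 ids (assuming lspci is available on the host):
  lspci -nn -d 8086:159b   # E810 (0x159b) - matches the two 0000:af:00.x ports found below
  lspci -nn -d 8086:1592   # the other E810 id the script looks for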
00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:18.826 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:18.826 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.826 00:08:19 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:18.826 Found net devices under 0000:af:00.0: cvl_0_0 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:18.826 Found net devices under 0000:af:00.1: cvl_0_1 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.826 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:19.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:26:19.085 00:26:19.085 --- 10.0.0.2 ping statistics --- 00:26:19.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.085 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:26:19.085 00:26:19.085 --- 10.0.0.1 ping statistics --- 00:26:19.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.085 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:19.085 ************************************ 00:26:19.085 START TEST nvmf_target_disconnect_tc1 00:26:19.085 ************************************ 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:26:19.085 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:19.345 EAL: No 
free 2048 kB hugepages reported on node 1 00:26:19.345 [2024-05-15 00:08:19.761246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.345 [2024-05-15 00:08:19.761839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.345 [2024-05-15 00:08:19.761886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd24b0 with addr=10.0.0.2, port=4420 00:26:19.345 [2024-05-15 00:08:19.761954] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:19.345 [2024-05-15 00:08:19.761997] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:19.345 [2024-05-15 00:08:19.762024] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:19.345 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:19.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:19.345 Initializing NVMe Controllers 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:26:19.345 00:26:19.345 real 0m0.109s 00:26:19.345 user 0m0.039s 00:26:19.345 sys 0m0.069s 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:19.345 ************************************ 00:26:19.345 END TEST nvmf_target_disconnect_tc1 00:26:19.345 ************************************ 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:19.345 ************************************ 00:26:19.345 START TEST nvmf_target_disconnect_tc2 00:26:19.345 ************************************ 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3731039 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3731039 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3731039 ']' 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:19.345 00:08:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:19.345 [2024-05-15 00:08:19.924681] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:26:19.345 [2024-05-15 00:08:19.924727] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.602 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.602 [2024-05-15 00:08:20.014133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:19.602 [2024-05-15 00:08:20.106838] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.602 [2024-05-15 00:08:20.106874] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.602 [2024-05-15 00:08:20.106884] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.602 [2024-05-15 00:08:20.106892] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.602 [2024-05-15 00:08:20.106916] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
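For tc2 the target is brought up fresh: nvmfappstart runs nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -m 0xF0 (reactors on cores 4-7, matching the "Reactor started on core" notices that follow) and the 0xFFFF tracepoint group mask, then waitforlisten blocks until pid 3731039 is answering RPCs. A minimal sketch of that launch using the path and flags recorded here:
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  # the script's waitforlisten 3731039 then polls the app's RPC socket before any rpc_cmd is sent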
00:26:19.602 [2024-05-15 00:08:20.107475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:19.602 [2024-05-15 00:08:20.107564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:19.602 [2024-05-15 00:08:20.107597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:19.602 [2024-05-15 00:08:20.107599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:20.168 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:20.168 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:26:20.168 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:20.168 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.168 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 Malloc0 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 [2024-05-15 00:08:20.806735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 00:08:20 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 [2024-05-15 00:08:20.834770] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:20.426 [2024-05-15 00:08:20.835015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=3731258 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:26:20.426 00:08:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:20.426 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.329 00:08:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 3731039 00:26:22.329 00:08:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting 
I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 [2024-05-15 00:08:22.864471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 
00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Read completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.329 starting I/O failed 00:26:22.329 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 [2024-05-15 00:08:22.864699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write 
completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 [2024-05-15 00:08:22.864923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Read completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 Write completed with error (sct=0, sc=8) 00:26:22.330 starting I/O failed 00:26:22.330 [2024-05-15 00:08:22.865140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport 
error -6 (No such device or address) on qpair id 1 00:26:22.330 [2024-05-15 00:08:22.865579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.866020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.866033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.866463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.866870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.866910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.867411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.867884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.867923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.868446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.868948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.868987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.869492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.869900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.869938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.870343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.870636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.870674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.871067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.871530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.871570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 
00:26:22.330 [2024-05-15 00:08:22.872061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.872536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.872575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.873088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.873564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.873604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.874000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.874410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.874449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.874933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.875294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.875310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.875696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.876152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.876199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.330 qpair failed and we were unable to recover it. 00:26:22.330 [2024-05-15 00:08:22.876578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.330 [2024-05-15 00:08:22.877056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.877094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.877512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.877986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.878025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 
00:26:22.331 [2024-05-15 00:08:22.878440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.878904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.878942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.879371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.879772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.879810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.880306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.880710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.880748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.881258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.881611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.881649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.882101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.882521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.882537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.882941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.883359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.883375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.883781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.884069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.884085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 
00:26:22.331 [2024-05-15 00:08:22.884503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.884795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.884811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.885218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.885621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.885637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.885997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.886334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.886373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.886853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.887320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.887359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.887790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.888262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.888301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.888742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.889185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.889248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.889707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.890106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.890122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 
00:26:22.331 [2024-05-15 00:08:22.890558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.891036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.891074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.891476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.891873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.891911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.892315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.892764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.892808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.893241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.893590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.893629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.894044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.894476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.894516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.894913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.895386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.895426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.895836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.896306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.896344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 
00:26:22.331 [2024-05-15 00:08:22.896598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.897068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.897106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.897589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.898003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.898041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.898452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.898932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.898971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.899310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.899712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.899750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.900252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.900666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.900705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.901159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.901571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.901611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 00:26:22.331 [2024-05-15 00:08:22.902103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.902492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.331 [2024-05-15 00:08:22.902531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.331 qpair failed and we were unable to recover it. 
00:26:22.331 [2024-05-15 00:08:22.902932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.903328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.903367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.903848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.904241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.904280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.904666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.905087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.905124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.905565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.905986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.906024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.906481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.906823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.906862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.907222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.907613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.907652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.908045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.908425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.908464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 
00:26:22.332 [2024-05-15 00:08:22.908945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.909415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.909474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.909960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.910432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.910471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.910881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.911291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.911307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.911686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.912155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.912201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.912454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.912844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.912882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.913340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.913748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.913764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.914174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.914580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.914596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 
00:26:22.332 [2024-05-15 00:08:22.914926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.915354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.915394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.915656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.916048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.916063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.916397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.916842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.916880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.917380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.917826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.332 [2024-05-15 00:08:22.917864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.332 qpair failed and we were unable to recover it. 00:26:22.332 [2024-05-15 00:08:22.918295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.918623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.918661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.597 qpair failed and we were unable to recover it. 00:26:22.597 [2024-05-15 00:08:22.919085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.919435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.919451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.597 qpair failed and we were unable to recover it. 00:26:22.597 [2024-05-15 00:08:22.919821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.920226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.920265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.597 qpair failed and we were unable to recover it. 
00:26:22.597 [2024-05-15 00:08:22.920674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.921149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.921188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.597 qpair failed and we were unable to recover it. 00:26:22.597 [2024-05-15 00:08:22.921677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.922173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.922218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.597 qpair failed and we were unable to recover it. 00:26:22.597 [2024-05-15 00:08:22.922674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.923070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.597 [2024-05-15 00:08:22.923109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.597 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.923593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.924051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.924067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.924423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.924769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.924784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.925213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.925697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.925734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.925935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.926410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.926449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 
00:26:22.598 [2024-05-15 00:08:22.926924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.927369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.927409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.927869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.928366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.928405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.928908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.929292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.929308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.929655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.930102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.930140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.930627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.931026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.931065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.931389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.931863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.931902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.932300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.932770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.932809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 
00:26:22.598 [2024-05-15 00:08:22.933200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.933581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.933620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.934076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.934469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.934508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.934894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.935104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.935142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.935573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.935899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.935936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.936352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.936822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.936866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.937345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.937841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.937879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.938336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.938786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.938824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 
00:26:22.598 [2024-05-15 00:08:22.939302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.939753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.939792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.940246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.940629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.940671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.941036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.941502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.941540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.941928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.942306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.942344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.942731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.943130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.943169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.943641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.944097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.944113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.944489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.944905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.944943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 
00:26:22.598 [2024-05-15 00:08:22.945373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.945840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.945879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.946358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.946852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.598 [2024-05-15 00:08:22.946891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.598 qpair failed and we were unable to recover it. 00:26:22.598 [2024-05-15 00:08:22.947214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.947564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.947603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.948059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.948428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.948468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.948947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.949396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.949436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.949910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.950371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.950387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.950817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.951285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.951324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 
00:26:22.599 [2024-05-15 00:08:22.951741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.952213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.952252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.952708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.953176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.953224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.953651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.954070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.954109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.954501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.954737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.954775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.955126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.955543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.955584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.955993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.956350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.956366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.956806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.957224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.957240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 
00:26:22.599 [2024-05-15 00:08:22.957682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.958064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.958102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.958581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.959031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.959072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.959450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.959857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.959896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.960321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.960768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.960807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.961205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.961392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.961430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.961889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.962356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.962395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.962746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.963141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.963180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 
00:26:22.599 [2024-05-15 00:08:22.963625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.964023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.964062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.964520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.964990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.965028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.965430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.965801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.965839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.966320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.966790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.966828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.967304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.967776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.967814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.968216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.968610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.968648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.969036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.969373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.969412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 
00:26:22.599 [2024-05-15 00:08:22.969804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.970141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.970180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.970516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.970892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.970930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.971411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.971802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.599 [2024-05-15 00:08:22.971818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.599 qpair failed and we were unable to recover it. 00:26:22.599 [2024-05-15 00:08:22.972176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.972519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.972558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.972938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.973408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.973448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.973907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.974369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.974385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.974815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.975283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.975322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 
00:26:22.600 [2024-05-15 00:08:22.975737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.975994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.976033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.976440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.976912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.976951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.977432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.977927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.977965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.978418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.978886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.978925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.979346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.979816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.979854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.980310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.980691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.980730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.981131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.981623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.981668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 
00:26:22.600 [2024-05-15 00:08:22.982025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.982446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.982486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.982885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.983348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.983364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.983776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.984235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.984274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.984675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.985072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.985111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.985561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.985937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.985975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.986446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.986886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.986924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.987349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.987816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.987854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 
00:26:22.600 [2024-05-15 00:08:22.988269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.988738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.988776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.989254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.989725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.989763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.990242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.990681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.990699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.991133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.991579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.991618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.992025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.992452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.992492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.992878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.993292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.993331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.993809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.994215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.994254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 
00:26:22.600 [2024-05-15 00:08:22.994713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.995181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.995242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.995674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.996155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.996200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.996655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.997102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.997140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.997625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.998104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.998132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.600 qpair failed and we were unable to recover it. 00:26:22.600 [2024-05-15 00:08:22.998479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-05-15 00:08:22.998873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:22.998911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:22.999394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:22.999887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:22.999927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.000411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.000808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.000847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 
00:26:22.601 [2024-05-15 00:08:23.001254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.001742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.001781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.002258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.002708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.002747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.003225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.003699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.003738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.004242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.004690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.004728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.005134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.005608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.005648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.006107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.006548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.006565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.006999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.007393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.007432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 
00:26:22.601 [2024-05-15 00:08:23.007879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.008332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.008348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.008634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.008997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.009036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.009445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.009838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.009877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.010353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.010765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.010805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.011220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.011690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.011730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.012210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.012682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.012721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.013204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.013624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.013663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 
00:26:22.601 [2024-05-15 00:08:23.014142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.014556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.014595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.015065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.015437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.015454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.015884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.016261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.016278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.016659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.016897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.016936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.017289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.017714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.017753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.018212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.018606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.018622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.018972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.019401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.019442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 
00:26:22.601 [2024-05-15 00:08:23.019873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.020182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.020202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.020611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.021028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.021066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.021423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.021767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.021806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.022277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.022644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.022682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.023104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.023552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.023593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.601 [2024-05-15 00:08:23.024047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.024446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-05-15 00:08:23.024485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.601 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.024867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.025281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.025298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 
00:26:22.602 [2024-05-15 00:08:23.025734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.026137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.026176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.026616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.027073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.027112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.027506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.027849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.027888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.028393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.028866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.028904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.029362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.029811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.029850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.030271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.030740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.030778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.031235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.031641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.031680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 
00:26:22.602 [2024-05-15 00:08:23.032160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.032615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.032631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.033076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.033522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.033561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.034061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.034420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.034459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.034783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.035262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.035301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.035702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.036166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.036220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.036631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.037033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.037071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.037531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.038000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.038038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 
00:26:22.602 [2024-05-15 00:08:23.038497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.038936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.038975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.039413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.039825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.039863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.040297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.040744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.040782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.041102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.041491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.041530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.042008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.042473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.042512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.042990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.043462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.043501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.043924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.044381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.044397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 
00:26:22.602 [2024-05-15 00:08:23.044750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.045123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.045162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.045639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.046037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-05-15 00:08:23.046076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.602 qpair failed and we were unable to recover it. 00:26:22.602 [2024-05-15 00:08:23.046471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.046897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.046936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.047344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.047673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.047711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.047960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.048383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.048423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.048876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.049346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.049385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.049817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.050119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.050158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-05-15 00:08:23.050625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.051045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.051084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.051511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.051957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.051996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.052383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.052835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.052873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.053349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.053823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.053862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.054296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.054769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.054807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.055288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.055781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.055820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.056304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.056691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.056730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-05-15 00:08:23.057138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.057539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.057579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.058034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.058473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.058489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.058890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.059245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.059285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.059739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.060046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.060084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.060462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.060896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.060912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.061369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.061836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.061874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.062255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.062692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.062730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-05-15 00:08:23.063216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.063678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.063717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.064128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.064394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.064411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.064814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.065241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.065281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.065551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.065970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.066009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.066464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.066654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.066692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.067169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.067647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.067686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.068115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.068583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.068622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-05-15 00:08:23.069100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.069503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.069543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.070025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.070410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.070449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.070923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.071392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.071432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.603 qpair failed and we were unable to recover it. 00:26:22.603 [2024-05-15 00:08:23.071857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.072303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.603 [2024-05-15 00:08:23.072320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.072525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.072809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.072847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.073344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.073749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.073788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.074265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.074737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.074776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 
00:26:22.604 [2024-05-15 00:08:23.075172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.075656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.075672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.076022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.076508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.076547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.077005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.077470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.077486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.077834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.078210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.078249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.078633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.079037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.079075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.079475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.079940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.079978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.080474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.080896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.080940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 
00:26:22.604 [2024-05-15 00:08:23.081419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.081809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.081848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.082343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.082752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.082790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.083189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.083679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.083718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.084201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.084569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.084585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.084936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.085211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.085250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.085704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.086173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.086221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.086623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.086996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.087035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 
00:26:22.604 [2024-05-15 00:08:23.087416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.087596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.087612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.088022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.088469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.088508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.088921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.089248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.089288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.089793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.090181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.090227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.090694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.091118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.091134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.091566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.092034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.092073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.092488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.092885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.092902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 
00:26:22.604 [2024-05-15 00:08:23.093254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.093664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.093703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.094036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.094421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.094460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.094966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.095370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.095409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.095886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.096332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.096371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.096849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.097244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.604 [2024-05-15 00:08:23.097284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.604 qpair failed and we were unable to recover it. 00:26:22.604 [2024-05-15 00:08:23.097720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.098150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.098186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.098534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.098900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.098938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 
00:26:22.605 [2024-05-15 00:08:23.099362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.099811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.099827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.100235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.100697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.100735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.101142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.101550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.101590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.102010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.102490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.102506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.102867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.103272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.103312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.103817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.104302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.104341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.104799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.105210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.105249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 
00:26:22.605 [2024-05-15 00:08:23.105583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.106053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.106091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.106556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.106816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.106854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.107315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.107782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.107821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.108296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.108696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.108734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.109153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.109624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.109641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.110075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.110545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.110583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.111096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.111566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.111606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 
00:26:22.605 [2024-05-15 00:08:23.111853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.112322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.112361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.112821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.113213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.113252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.113729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.114121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.114159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.114529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.115002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.115040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.115512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.115863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.115878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.116233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.116646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.116663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.117034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.117505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.117522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 
00:26:22.605 [2024-05-15 00:08:23.117669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.118072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.118088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.118491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.118962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.119000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.119473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.119874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.119890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.120171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.120606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.120623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.121002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.121392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.121431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.121838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.122333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.605 [2024-05-15 00:08:23.122372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.605 qpair failed and we were unable to recover it. 00:26:22.605 [2024-05-15 00:08:23.122823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.123256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.123295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 
00:26:22.606 [2024-05-15 00:08:23.123752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.124234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.124250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.124601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.124979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.124998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.125372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.125760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.125776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.126211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.126707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.126746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.127212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.127637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.127653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.128034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.128458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.128474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.128671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.129076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.129114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 
00:26:22.606 [2024-05-15 00:08:23.129543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.130010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.130026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.130431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.130846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.130884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.131272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.131661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.131677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.132080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.132500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.132517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.132872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.133273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.133289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.133723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.133930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.133969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.134437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.134891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.134907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 
00:26:22.606 [2024-05-15 00:08:23.135266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.135607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.135623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.135923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.136345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.136362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.136711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.137117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.137133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.137550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.137883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.137899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.138246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.138659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.138675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.139047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.139475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.139492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.139917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.140260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.140276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 
00:26:22.606 [2024-05-15 00:08:23.140609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.141073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.141089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.141481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.141953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.141991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.142323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.142764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.142779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.606 qpair failed and we were unable to recover it. 00:26:22.606 [2024-05-15 00:08:23.143080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.606 [2024-05-15 00:08:23.143502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.143541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.144021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.144398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.144438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.144839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.145196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.145212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.145617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.145893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.145932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 
00:26:22.607 [2024-05-15 00:08:23.146422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.146823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.146839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.147266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.147584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.147623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.148078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.148541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.148557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.148907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.149243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.149259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.149686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.150022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.150038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.150463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.150864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.150880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.151222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.151704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.151720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 
00:26:22.607 [2024-05-15 00:08:23.152124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.152475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.152491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.152825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.153248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.153264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.153685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.154131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.154169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.154577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.155005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.155021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.155452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.155900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.155916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.156296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.156648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.156664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.157068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.157426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.157443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 
00:26:22.607 [2024-05-15 00:08:23.157847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.158182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.158203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.158607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.159030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.159046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.159450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.159858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.159874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.160205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.160540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.160556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.160909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.161359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.161398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.161872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.162238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.162254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.162627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.163064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.163080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 
00:26:22.607 [2024-05-15 00:08:23.163414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.163869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.163885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.164300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.164756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.164794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.165272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.165740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.165756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.166120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.166486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.166505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.607 qpair failed and we were unable to recover it. 00:26:22.607 [2024-05-15 00:08:23.166699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.607 [2024-05-15 00:08:23.167121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.167137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.167548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.167944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.167982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.168430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.168773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.168789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 
00:26:22.608 [2024-05-15 00:08:23.169159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.169516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.169532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.169869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.170163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.170179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.170393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.170756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.170795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.171279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.171722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.171760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.172232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.172635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.172651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.173053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.173202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.173218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.173570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.173900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.173919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 
00:26:22.608 [2024-05-15 00:08:23.174322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.174601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.174617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.174828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.175172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.175188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.175533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.175935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.175974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.176423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.176776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.176792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.177149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.177435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.177452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.177805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.178154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.178170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.178579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.178931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.178947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 
00:26:22.608 [2024-05-15 00:08:23.179296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.179666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.179682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.180024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.180413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.180454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.180798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.181168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.181184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.181543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.181967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.182005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.608 [2024-05-15 00:08:23.182411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.182875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-15 00:08:23.182891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.608 qpair failed and we were unable to recover it. 00:26:22.874 [2024-05-15 00:08:23.183319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.183602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.183618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.874 qpair failed and we were unable to recover it. 00:26:22.874 [2024-05-15 00:08:23.183978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.184347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.184364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.874 qpair failed and we were unable to recover it. 
00:26:22.874 [2024-05-15 00:08:23.184700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.184990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.185006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.874 qpair failed and we were unable to recover it. 00:26:22.874 [2024-05-15 00:08:23.185411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.185757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.185795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.874 qpair failed and we were unable to recover it. 00:26:22.874 [2024-05-15 00:08:23.186274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.186588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.186603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.874 qpair failed and we were unable to recover it. 00:26:22.874 [2024-05-15 00:08:23.186951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.187356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.187372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.874 qpair failed and we were unable to recover it. 00:26:22.874 [2024-05-15 00:08:23.187720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.188064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.188102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.874 qpair failed and we were unable to recover it. 00:26:22.874 [2024-05-15 00:08:23.188581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.189021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.189037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.874 qpair failed and we were unable to recover it. 00:26:22.874 [2024-05-15 00:08:23.189393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.189749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.874 [2024-05-15 00:08:23.189788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.874 qpair failed and we were unable to recover it. 
00:26:22.874 [2024-05-15 00:08:23.190266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.874 [2024-05-15 00:08:23.190734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.874 [2024-05-15 00:08:23.190772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:22.874 qpair failed and we were unable to recover it.
00:26:22.875 [... the same error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retried connection attempt from 00:08:23.190266 through 00:08:23.324973 ...]
00:26:22.879 [2024-05-15 00:08:23.325368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.879 [2024-05-15 00:08:23.325804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.879 [2024-05-15 00:08:23.325843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.879 qpair failed and we were unable to recover it. 00:26:22.879 [2024-05-15 00:08:23.326293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.879 [2024-05-15 00:08:23.326724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.879 [2024-05-15 00:08:23.326762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.879 qpair failed and we were unable to recover it. 00:26:22.879 [2024-05-15 00:08:23.327148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.879 [2024-05-15 00:08:23.327479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.879 [2024-05-15 00:08:23.327518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.879 qpair failed and we were unable to recover it. 00:26:22.879 [2024-05-15 00:08:23.327926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.879 [2024-05-15 00:08:23.328308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.879 [2024-05-15 00:08:23.328348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.879 qpair failed and we were unable to recover it. 00:26:22.879 [2024-05-15 00:08:23.328829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.879 [2024-05-15 00:08:23.329297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.879 [2024-05-15 00:08:23.329335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.879 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.329766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.330145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.330183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.330596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.331091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.331129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 
00:26:22.880 [2024-05-15 00:08:23.331613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.332056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.332094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.332577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.333057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.333095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.333604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.333976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.334015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.334418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.334901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.334940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.335334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.335804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.335843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.336257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.336687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.336726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.337142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.337574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.337614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 
00:26:22.880 [2024-05-15 00:08:23.338058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.338435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.338474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.338911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.339303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.339342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.339798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.340247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.340287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.340765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.341105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.341143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.341609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.342019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.342058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.342466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.342888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.342927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.343322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.343795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.343833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 
00:26:22.880 [2024-05-15 00:08:23.344264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.344695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.344733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.345121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.345571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.345611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.346066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.346540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.346580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.347005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.347460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.347499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.347985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.348451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.348490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.348919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.349371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.349387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.349726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.350038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.350077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 
00:26:22.880 [2024-05-15 00:08:23.350554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.351052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.351090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.351571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.351920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.351958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.352381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.352778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.352816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.353080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.353547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.353587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.354042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.354421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.354460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.354915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.355378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.355417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.880 qpair failed and we were unable to recover it. 00:26:22.880 [2024-05-15 00:08:23.355606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.880 [2024-05-15 00:08:23.356004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.356042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 
00:26:22.881 [2024-05-15 00:08:23.356509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.356976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.357018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.357388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.357848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.357888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.358227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.358701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.358739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.359217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.359666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.359715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.360075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.360452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.360491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.360993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.361383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.361399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.361773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.362218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.362257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 
00:26:22.881 [2024-05-15 00:08:23.362722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.363184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.363204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.363587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.363873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.363911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.364227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.364623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.364661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.365138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.365527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.365569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.365770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.366166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.366181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.366629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.366978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.366994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.367417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.367787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.367803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 
00:26:22.881 [2024-05-15 00:08:23.368244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.368665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.368681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.369012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.369356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.369372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.369728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.370136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.370152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.370499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.370871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.370887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.371312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.371665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.371681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.371988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.372331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.372347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.372698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.373079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.373095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 
00:26:22.881 [2024-05-15 00:08:23.373428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.373798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.373814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.374217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.374581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.374597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.374884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.375304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.375321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.375745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.376098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.376114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.376324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.376656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.376672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.377075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.377402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.377418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.377839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.378258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.378275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 
00:26:22.881 [2024-05-15 00:08:23.378679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.379030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.379045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.881 qpair failed and we were unable to recover it. 00:26:22.881 [2024-05-15 00:08:23.379397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.881 [2024-05-15 00:08:23.379761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.379777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.380111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.380554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.380570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.380997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.381341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.381357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.381786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.382137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.382153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.382503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.382919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.382935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.383272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.383714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.383730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 
00:26:22.882 [2024-05-15 00:08:23.384155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.384556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.384572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.384936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.385355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.385372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.385657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.386023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.386039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.386464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.386884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.386900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.387200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.387620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.387636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.388064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.388463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.388479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.388911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.389282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.389298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 
00:26:22.882 [2024-05-15 00:08:23.389725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.390151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.390167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.390521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.390852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.390868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.391236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.391591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.391607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.391802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.392199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.392215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.392641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.393083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.393099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.393503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.393928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.393944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.394312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.394712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.394728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 
00:26:22.882 [2024-05-15 00:08:23.395154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.395520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.395535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.395957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.396290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.396306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.396661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.397077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.397092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.397513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.397936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.397951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.398399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.398752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.398767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.399140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.399420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.399438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.399772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.400171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.400187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 
00:26:22.882 [2024-05-15 00:08:23.400562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.400940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.400956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.401356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.401781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.401797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.402146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.402567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.882 [2024-05-15 00:08:23.402584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.882 qpair failed and we were unable to recover it. 00:26:22.882 [2024-05-15 00:08:23.402942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.403366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.403382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.403811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.404260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.404276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.404566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.404911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.404928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.405267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.405609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.405625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 
00:26:22.883 [2024-05-15 00:08:23.406047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.406420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.406437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.406649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.407072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.407088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.407507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.407833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.407849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.408184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.408618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.408634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.409043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.409469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.409485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.409841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.410240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.410256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.410681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.411030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.411046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 
00:26:22.883 [2024-05-15 00:08:23.411395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.411819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.411835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.412277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.412676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.412692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.413072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.413476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.413492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.413890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.414261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.414278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.414683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.415083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.415099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.415532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.415876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.415915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.416415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.416871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.416909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 
00:26:22.883 [2024-05-15 00:08:23.417295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.417752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.417790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.418267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.418694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.418732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.419133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.419604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.419644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.420135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.420621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.420660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.421084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.421481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.421521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.421979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.422369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.422409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 00:26:22.883 [2024-05-15 00:08:23.422864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.423323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.423362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.883 qpair failed and we were unable to recover it. 
00:26:22.883 [2024-05-15 00:08:23.423847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.883 [2024-05-15 00:08:23.424169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.424215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.424676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.424936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.424975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.425443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.425697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.425736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.426216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.426595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.426635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.427053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.427286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.427303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.427723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.428200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.428240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.428569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.428816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.428855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 
00:26:22.884 [2024-05-15 00:08:23.429277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.429680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.429719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.430206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.430596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.430635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.431114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.431532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.431572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.432051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.432448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.432488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.432906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.433311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.433351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.433775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.434241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.434281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.434739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.435214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.435254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 
00:26:22.884 [2024-05-15 00:08:23.435715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.436199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.436238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.436698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.437091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.437129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.437544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.438026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.438065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.438462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.438908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.438947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.439425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.439899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.439943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.440264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.440691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.440731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.441233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.441704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.441721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 
00:26:22.884 [2024-05-15 00:08:23.442070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.442357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.442376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.442811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.443218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.443260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.443684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.444012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.444051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.444548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.444939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.444978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.445449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.445740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.445779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.446235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.446625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.446663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.447139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.447538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.447578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 
00:26:22.884 [2024-05-15 00:08:23.448059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.448433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.448472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.448950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.449408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.884 [2024-05-15 00:08:23.449424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.884 qpair failed and we were unable to recover it. 00:26:22.884 [2024-05-15 00:08:23.449861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.450279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.450319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 00:26:22.885 [2024-05-15 00:08:23.450731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.451107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.451146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 00:26:22.885 [2024-05-15 00:08:23.451507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.451909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.451948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 00:26:22.885 [2024-05-15 00:08:23.452423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.452917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.452956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 00:26:22.885 [2024-05-15 00:08:23.453416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.453786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.453825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 
00:26:22.885 [2024-05-15 00:08:23.454283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.454447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.454463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 00:26:22.885 [2024-05-15 00:08:23.454890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.455221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.455237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 00:26:22.885 [2024-05-15 00:08:23.455541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.455944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.455960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 00:26:22.885 [2024-05-15 00:08:23.456294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.456653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.456669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 00:26:22.885 [2024-05-15 00:08:23.457117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.457493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.457532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 00:26:22.885 [2024-05-15 00:08:23.457989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.458338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.885 [2024-05-15 00:08:23.458377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:22.885 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.458847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.459271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.459287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 
00:26:23.151 [2024-05-15 00:08:23.459666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.460085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.460101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.460531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.461005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.461044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.461522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.461816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.461855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.462339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.462739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.462755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.463138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.463570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.463609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.464086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.464504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.464544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.465024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.465427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.465443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 
00:26:23.151 [2024-05-15 00:08:23.465790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.466214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.466254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.466654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.467101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.467139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.467611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.468067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.468106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.468595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.468990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.469029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.469510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.469903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.469941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.470199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.470647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.470687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.471162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.471564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.471604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 
00:26:23.151 [2024-05-15 00:08:23.472014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.472433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.472472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.472804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.473209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.473248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.473706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.474084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.474122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.474604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.474819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.474858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.475335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.475808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.475846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.476300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.476635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.476674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.477079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.477435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.477452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 
00:26:23.151 [2024-05-15 00:08:23.477739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.478139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.478177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.478606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.479005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.479021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.479466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.479821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.479837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.480189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.480645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.480684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.481140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.481623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.481663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.482001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.482447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.482487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 00:26:23.151 [2024-05-15 00:08:23.482980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.483425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.151 [2024-05-15 00:08:23.483464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.151 qpair failed and we were unable to recover it. 
00:26:23.151 [2024-05-15 00:08:23.483875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.484326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.484365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.484843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.485317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.485356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.485832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.486282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.486332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.486806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.487278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.487318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.487796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.488218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.488258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.488617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.489063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.489102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.489506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.489969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.490008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 
00:26:23.152 [2024-05-15 00:08:23.490434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.490900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.490938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.491371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.491816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.491854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.492344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.492736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.492774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.493234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.493476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.493493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.493901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.494307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.494346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.494833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.495305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.495351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.495855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.496321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.496337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 
00:26:23.152 [2024-05-15 00:08:23.496723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.497115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.497154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.497549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.497790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.497829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.498306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.498714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.498752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.499231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.499573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.499611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.499950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.500408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.500447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.500909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.501301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.501340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.501815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.501998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.502037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 
00:26:23.152 [2024-05-15 00:08:23.502420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.502864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.502903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.503373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.503736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.503774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.504259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.504691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.504706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.504902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.505309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.505348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.505736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.506126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.506164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.506644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.507114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.507153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.507620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.508016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.508061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 
00:26:23.152 [2024-05-15 00:08:23.508468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.508940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.508979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.152 qpair failed and we were unable to recover it. 00:26:23.152 [2024-05-15 00:08:23.509456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.509801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.152 [2024-05-15 00:08:23.509840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.510297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.510743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.510781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.511204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.511659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.511675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.512078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.512539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.512579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.513068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.513516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.513556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.514034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.514507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.514547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 
00:26:23.153 [2024-05-15 00:08:23.514948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.515129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.515168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.515665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.516058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.516097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.516550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.516942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.516981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.517457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.517834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.517873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.518328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.518800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.518839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.519294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.519695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.519734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.520189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.520623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.520662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 
00:26:23.153 [2024-05-15 00:08:23.521138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.521570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.521609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.522089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.522488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.522529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.522842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.523273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.523313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.523812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.524291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.524329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.524808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.525208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.525247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.525678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.526061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.526101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.526599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.527100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.527140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 
00:26:23.153 [2024-05-15 00:08:23.527628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.528104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.528143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.528649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.529099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.529138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.529615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.530019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.530058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.530492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.530946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.530984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.531439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.531857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.531895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.532387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.532857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.532873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.533245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.533471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.533509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 
00:26:23.153 [2024-05-15 00:08:23.533923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.534392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.534433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.534839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.535306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.535345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.535826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.536008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.536047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.153 qpair failed and we were unable to recover it. 00:26:23.153 [2024-05-15 00:08:23.536525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.536904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.153 [2024-05-15 00:08:23.536943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.537400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.537861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.537900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.538355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.538747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.538786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.539242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.539709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.539725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 
00:26:23.154 [2024-05-15 00:08:23.540112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.540581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.540626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.541130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.541583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.541623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.542002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.542329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.542369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.542767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.543164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.543210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.543666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.544136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.544175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.544588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.545001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.545041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.545453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.545917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.545956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 
00:26:23.154 [2024-05-15 00:08:23.546379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.546847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.546886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.547390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.547628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.547667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.548169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.548592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.548631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.549039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.549510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.549549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.549909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.550380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.550420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.550897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.551365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.551404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.551880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.552346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.552386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 
00:26:23.154 [2024-05-15 00:08:23.552653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.553123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.553162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.553640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.554137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.554180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.554594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.555058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.555097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.555558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.556028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.556067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.556523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.556979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.557018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.557431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.557899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.557937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.558417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.558860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.558876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 
00:26:23.154 [2024-05-15 00:08:23.559305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.559516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.559532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.559897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.560390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.560429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.560820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.561161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.561207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.561528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.561999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.562038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.562494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.562892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.562931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.154 qpair failed and we were unable to recover it. 00:26:23.154 [2024-05-15 00:08:23.563408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.154 [2024-05-15 00:08:23.563859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.563898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.564330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.564719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.564757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 
00:26:23.155 [2024-05-15 00:08:23.565236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.565703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.565719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.566158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.566587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.566627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.566978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.567440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.567456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.567912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.568332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.568384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.568814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.569241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.569258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.569588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.569981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.570020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.570420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.570660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.570699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 
00:26:23.155 [2024-05-15 00:08:23.571201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.571662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.571709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.572063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.572527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.572566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.572964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.573356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.573395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.573785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.574188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.574236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.574713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.575184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.575230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.575663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.576050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.576089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.576514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.576971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.576987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 
00:26:23.155 [2024-05-15 00:08:23.577394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.577844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.577883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.578147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.578575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.578614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.578919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.579341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.579357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.579761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.580254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.580295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.580776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.581223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.581263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.581529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.581987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.582025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.582502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.582887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.582903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 
00:26:23.155 [2024-05-15 00:08:23.583318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.583694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.583733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.155 qpair failed and we were unable to recover it. 00:26:23.155 [2024-05-15 00:08:23.584187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.584681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.155 [2024-05-15 00:08:23.584720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.585107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.585486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.585532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.585988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.586449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.586487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.586968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.587439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.587477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.587793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.588222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.588261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.588671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.589000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.589039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 
00:26:23.156 [2024-05-15 00:08:23.589465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.589798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.589837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.590221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.590694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.590732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.591188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.591664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.591709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.592136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.592566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.592605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.593078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.593550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.593590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.594055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.594519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.594558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.595051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.595541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.595580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 
00:26:23.156 [2024-05-15 00:08:23.596038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.596257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.596297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.596773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.597151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.597203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.597586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.597976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.598014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.598420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.598889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.598928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.599413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.599910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.599948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.600350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.600820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.600859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.601353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.601799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.601838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 
00:26:23.156 [2024-05-15 00:08:23.602217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.602680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.602719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.603175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.603423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.603461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.603937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.604350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.604390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.604793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.605262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.605301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.605790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.606284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.606324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.606807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.607205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.607245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.607707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.608180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.608225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 
00:26:23.156 [2024-05-15 00:08:23.608679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.609051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.609090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.609546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.610002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.610018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.156 [2024-05-15 00:08:23.610426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.610778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.156 [2024-05-15 00:08:23.610817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.156 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.611295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.611633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.611649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.612021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.612452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.612491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.612905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.613365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.613405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.613871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.614394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.614434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 
00:26:23.157 [2024-05-15 00:08:23.614790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.615121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.615137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.615483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.615929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.615945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.616370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.616743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.616759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.617186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.617572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.617589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.617969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.618300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.618316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.618691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.619056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.619072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.619421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.619841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.619857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 
00:26:23.157 [2024-05-15 00:08:23.620290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.620550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.620566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.620969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.621194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.621211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.621499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.621919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.621935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.622291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.622631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.622647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.623050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.623469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.623485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.623916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.624341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.624357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.624729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.625070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.625085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 
00:26:23.157 [2024-05-15 00:08:23.625464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.625911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.625927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.626261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.626685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.626701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.627071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.627493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.627510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.627936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.628232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.628248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.628662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.629008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.629027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.629453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.629874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.629890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.630295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.630717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.630733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 
00:26:23.157 [2024-05-15 00:08:23.631136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.631556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.631572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.631906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.632327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.632343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.632694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.633136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.633152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.633575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.633861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.633877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.157 qpair failed and we were unable to recover it. 00:26:23.157 [2024-05-15 00:08:23.634258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.157 [2024-05-15 00:08:23.634681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.634698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.635102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.635507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.635523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.635858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.635985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.636001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 
00:26:23.158 [2024-05-15 00:08:23.636194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.636616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.636634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.637005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.637483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.637499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.637947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.638277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.638293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.638697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.639097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.639113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.639484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.639904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.639920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.640293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.640716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.640732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.641010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.641431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.641447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 
00:26:23.158 [2024-05-15 00:08:23.641640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.641989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.642004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.642356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.642764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.642780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.643152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.643455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.643471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.643809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.644150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.644165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.644622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.645065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.645081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.645477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.645823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.645839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.646264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.646686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.646702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 
00:26:23.158 [2024-05-15 00:08:23.647149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.647550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.647566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.647969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.648394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.648410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.648832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.649231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.649247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.649677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.650005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.650021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.650351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.650762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.650778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.651189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.651670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.651686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.652099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.652498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.652515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 
00:26:23.158 [2024-05-15 00:08:23.652943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.653272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.653288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.653661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.654084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.654101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.654452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.654786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.654802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.655172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.655601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.655617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.655815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.656253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.656270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.656575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.657029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.158 [2024-05-15 00:08:23.657045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.158 qpair failed and we were unable to recover it. 00:26:23.158 [2024-05-15 00:08:23.657467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.657869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.657886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 
00:26:23.159 [2024-05-15 00:08:23.658234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.658603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.658619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.659034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.659327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.659344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.659772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.660143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.660159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.660503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.660903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.660919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.661220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.661516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.661532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.661824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.662239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.662256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.662681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.663087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.663103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 
00:26:23.159 [2024-05-15 00:08:23.663483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.663836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.663852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.664198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.664575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.664591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.664947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.665348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.665364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.665788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.666115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.666131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.666534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.666897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.666913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.667256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.667621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.667637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.667982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.668333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.668350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 
00:26:23.159 [2024-05-15 00:08:23.668710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.669059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.669076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.669481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.669905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.669944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.670371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.670824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.670862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.671268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.671670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.671709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.671902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.672309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.672348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.672762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.673210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.673250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.673658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.674047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.674087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 
00:26:23.159 [2024-05-15 00:08:23.674492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.674861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.674877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.675298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.675602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.675641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.676064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.676445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.676489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.676854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.677254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.159 [2024-05-15 00:08:23.677292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.159 qpair failed and we were unable to recover it. 00:26:23.159 [2024-05-15 00:08:23.677613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.677986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.678024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.678428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.678909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.678947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.679335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.679742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.679781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 
00:26:23.160 [2024-05-15 00:08:23.680186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.680535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.680574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.681077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.681456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.681495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.681906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.682375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.682414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.682815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.683283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.683321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.683746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.684159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.684203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.684605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.685074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.685112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.685742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.686076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.686115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 
00:26:23.160 [2024-05-15 00:08:23.686550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.686887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.686926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.687371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.687661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.687700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.688174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.688592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.688633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.689035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.689509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.689549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.689901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.690382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.690421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.690758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.691235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.691274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.691542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.691878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.691917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 
00:26:23.160 [2024-05-15 00:08:23.692307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.692724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.692763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.693150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.693562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.693602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.693984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.694433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.694473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.694832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.695738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.695770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.696136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.696546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.696588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.696787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.697257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.697297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.697634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.697998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.698038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 
00:26:23.160 [2024-05-15 00:08:23.698518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.698919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.698959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.700420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.700811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.700830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.701189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.701499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.701546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.702729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.705603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.705636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.706022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.706482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.706525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.160 [2024-05-15 00:08:23.706946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.707381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.160 [2024-05-15 00:08:23.707422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.160 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.707770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.708219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.708259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 
00:26:23.161 [2024-05-15 00:08:23.708730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.709133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.709172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.709548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.709888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.709928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.710329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.710748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.710787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.711243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.711635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.711674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.712074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.712472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.712512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.712919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.713279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.713319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.713747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.714066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.714082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 
00:26:23.161 [2024-05-15 00:08:23.714380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.714729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.714768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.715159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.715559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.715600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.716043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.716386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.716426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.716827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.717247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.717287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.717636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.718043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.718081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.718491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.718890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.718929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.719333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.719664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.719703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 
00:26:23.161 [2024-05-15 00:08:23.720092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.720402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.720443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.720848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.721152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.721168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.721479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.721829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.721868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.722278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.722518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.722558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.722914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.723295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.723341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.723729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.724126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.724142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.724548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.724975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.724992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 
00:26:23.161 [2024-05-15 00:08:23.725291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.725565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.725581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.725914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.726292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.726347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.726763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.727133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.727172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.727588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.727900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.727916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.728077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.728423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.728440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.728774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.729116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.729155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 00:26:23.161 [2024-05-15 00:08:23.729647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.730066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.730105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.161 qpair failed and we were unable to recover it. 
00:26:23.161 [2024-05-15 00:08:23.730469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.730801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.161 [2024-05-15 00:08:23.730840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.162 qpair failed and we were unable to recover it. 00:26:23.162 [2024-05-15 00:08:23.731178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.731498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.731537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.162 qpair failed and we were unable to recover it. 00:26:23.162 [2024-05-15 00:08:23.731926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.732294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.732333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.162 qpair failed and we were unable to recover it. 00:26:23.162 [2024-05-15 00:08:23.732736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.733171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.733187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.162 qpair failed and we were unable to recover it. 00:26:23.162 [2024-05-15 00:08:23.733392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.733816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.733832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.162 qpair failed and we were unable to recover it. 00:26:23.162 [2024-05-15 00:08:23.734217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.734557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.734574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.162 qpair failed and we were unable to recover it. 00:26:23.162 [2024-05-15 00:08:23.734856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.735276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.162 [2024-05-15 00:08:23.735292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.162 qpair failed and we were unable to recover it. 
00:26:23.162 [2024-05-15 00:08:23.735635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.736037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.736053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.736458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.736857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.736874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.737279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.737630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.737646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.737934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.738219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.738236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.738576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.738971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.739010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.739429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.739750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.739789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.740289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.740548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.740587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 
00:26:23.428 [2024-05-15 00:08:23.741053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.741510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.741550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.741940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.742230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.742271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.742455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.742860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.742899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.743303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.743717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.743755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.744216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.744690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.744729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.745069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.745526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.745565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.745971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.746418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.746458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 
00:26:23.428 [2024-05-15 00:08:23.746677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a39140 is same with the state(5) to be set 00:26:23.428 [2024-05-15 00:08:23.747256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.747668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.747722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.748092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.748565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.748606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.748792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.749155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.749204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.749676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.750116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.750155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.750519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.750910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.750926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.428 qpair failed and we were unable to recover it. 00:26:23.428 [2024-05-15 00:08:23.751355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.428 [2024-05-15 00:08:23.751640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.751657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.752038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.752322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.752338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 
00:26:23.429 [2024-05-15 00:08:23.752767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.753119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.753159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.753440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.753851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.753890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.754233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.754634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.754673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.755176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.755603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.755620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.755968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.756396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.756413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.756742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.757080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.757097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.757388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.757804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.757843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 
00:26:23.429 [2024-05-15 00:08:23.758321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.758695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.758733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.758982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.759380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.759419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.759846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.760263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.760303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.760784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.761109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.761147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.761638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.762021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.762060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.762468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.762913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.762952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.763305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.763764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.763803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 
00:26:23.429 [2024-05-15 00:08:23.764198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.764497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.764536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.764921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.765260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.765300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.765691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.766037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.766076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.766490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.766981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.767021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.767359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.767826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.767865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.768265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.768711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.768750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.768999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.769342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.769382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 
00:26:23.429 [2024-05-15 00:08:23.769801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.770210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.770249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.770565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.770894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.770934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.771352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.771747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.771786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.772244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.772574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.772613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.773031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.773427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.773467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.773938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.774279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.774319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.774798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.774981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.775019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 
00:26:23.429 [2024-05-15 00:08:23.775361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.775748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.775787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.776245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.776692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.776730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.777135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.777540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.777579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.777937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.778398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.778415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.778760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.779052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.779069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.779421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.779767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.779784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.780138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.780538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.780578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 
00:26:23.429 [2024-05-15 00:08:23.780974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.781352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.781391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.781872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.782255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.782295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.782705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.783175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.783195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.783355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.783711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.783750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.784208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.784406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.784445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.784825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.786277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.786306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.786671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.788240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.788267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 
00:26:23.429 [2024-05-15 00:08:23.788590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.790183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.790224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.790603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.790942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.790959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.791414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.791866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.791906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.792251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.792609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.792648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.793049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.793440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.793480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.793938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.794279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.794318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 00:26:23.429 [2024-05-15 00:08:23.794721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.795168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.429 [2024-05-15 00:08:23.795220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.429 qpair failed and we were unable to recover it. 
00:26:23.430 [2024-05-15 00:08:23.795623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.796013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.796031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.796329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.796661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.796677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.796972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.797254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.797271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.797699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.797973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.797989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.798369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.798508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.798525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.798864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.799211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.799228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.799522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.799858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.799875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 
00:26:23.430 [2024-05-15 00:08:23.800222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.800631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.800648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.800936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.801329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.801345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.801703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.802054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.802070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.802494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.802835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.802851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.803204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.803613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.803630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.803986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.804348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.804364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.804797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.805219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.805236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 
00:26:23.430 [2024-05-15 00:08:23.805594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.805935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.805951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.806305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.806708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.806724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.807097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.807454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.807471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.807825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.808163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.808179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.808520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.808858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.808874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.809230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.809433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.809449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.809821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.810155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.810171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 
00:26:23.430 [2024-05-15 00:08:23.810547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.810907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.810923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.811259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.811681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.811698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.812034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.812396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.812412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.812688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.813101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.813117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.813378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.813633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.813649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.813924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.814323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.814340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.814767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.815211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.815228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 
00:26:23.430 [2024-05-15 00:08:23.815666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.815924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.815940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.816308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.816645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.816661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.816971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.817337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.817354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.817786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.818203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.818219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.818492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.818845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.818861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.819275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.819544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.819560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.819924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.820347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.820363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 
00:26:23.430 [2024-05-15 00:08:23.820722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.821120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.821136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.821561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.821915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.821931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.822336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.822619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.822636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.822999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.823424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.823441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.823845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.824207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.824223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.824437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.824888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.824905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.825242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.825574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.825590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 
00:26:23.430 [2024-05-15 00:08:23.825996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.826279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.826296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.826730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.827101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.827118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.827543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.827904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.827920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.828211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.828632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.828648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.828991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.829426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.829442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.829792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.830133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.830153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 00:26:23.430 [2024-05-15 00:08:23.830620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.830969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-05-15 00:08:23.830986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.430 qpair failed and we were unable to recover it. 
00:26:23.430 [2024-05-15 00:08:23.831340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.430 [2024-05-15 00:08:23.831467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.430 [2024-05-15 00:08:23.831484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420
00:26:23.430 qpair failed and we were unable to recover it.
00:26:23.430 [... the same failure sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously for timestamps 00:08:23.831912 through 00:08:23.941276 ...]
00:26:23.434 [2024-05-15 00:08:23.941681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.942007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.942046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.942430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.942766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.942805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.945916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.946632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.946656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.947046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.947402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.947419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.947772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.948134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.948150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.948588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.948879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.948896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.949228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.949545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.949562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 
00:26:23.434 [2024-05-15 00:08:23.949883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.950243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.950259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.950630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.951044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.951083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.951576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.951921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.951960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.952384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.952853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.952892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.953238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.953616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.953656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.954113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.954452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.954471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.954820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.955113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.955153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 
00:26:23.434 [2024-05-15 00:08:23.955498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.955946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.955985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.956322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.956736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.956775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.957210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.957598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.957637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.957971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.958362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.958401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.958793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.959122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.959161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.959623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.960020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.960059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.960393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.960785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.960824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 
00:26:23.434 [2024-05-15 00:08:23.961165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.961489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.961506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.961786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.962189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.962211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.962486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.962773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.962811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.963289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.963719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.963757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.964216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.964541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.964581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.964970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.965304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.965320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.965604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.965877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.965893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 
00:26:23.434 [2024-05-15 00:08:23.966269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.966668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.966708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.966907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.967296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.967314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.967726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.968105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.968144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.968500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.968816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.968832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.969162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.969514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.969560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.970019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.970419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.970459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.970747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.971018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.971034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 
00:26:23.434 [2024-05-15 00:08:23.971378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.971718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.971757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.972102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.972421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.972438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.972796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.973066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.973105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.973424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.973794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.973832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.974298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.974763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.974809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.975088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.975387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.975404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.975764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.976064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.976102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 
00:26:23.434 [2024-05-15 00:08:23.976601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.977054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.977098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.977440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.977865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.977904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.978364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.978818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.978857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.979289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.979706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.979745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.980135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.980520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.980536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.980888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.981274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.981312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.981741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.982153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.982201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 
00:26:23.434 [2024-05-15 00:08:23.982544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.982863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.982880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.983226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.983506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.983545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.983869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.984186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.984235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.984561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.984734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.984772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.985166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.985680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.985719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.986179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.986590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.986630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 00:26:23.434 [2024-05-15 00:08:23.987055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.987521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.434 [2024-05-15 00:08:23.987561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.434 qpair failed and we were unable to recover it. 
00:26:23.435 [2024-05-15 00:08:23.987965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.988346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.988385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.988812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.989132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.989171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.989571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.989966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.990005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.990368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.990800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.990839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.991256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.991650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.991689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.992087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.992536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.992583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.992879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.993234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.993251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 
00:26:23.435 [2024-05-15 00:08:23.993665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.994116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.994155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.994518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.994830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.994846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.995271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.995649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.995687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.996032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.996502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.996543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.996887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.997239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.997279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.997759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.998151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.998190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:23.998537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.998937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.998976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 
00:26:23.435 [2024-05-15 00:08:23.999323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.999719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:23.999758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.000265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.000710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.000749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.001145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.001545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.001586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.001976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.002369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.002408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.002837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.003287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.003327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.003813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.004262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.004301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.004503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.004835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.004874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 
00:26:23.435 [2024-05-15 00:08:24.005281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.005750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.005789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.006179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.006578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.006617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.006894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.007244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.007285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.007688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.008080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.008096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.008397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.008695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.008711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.009100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.009506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.009545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.009952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.010371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.010387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 
00:26:23.435 [2024-05-15 00:08:24.010547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.010819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.010835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.011180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.011662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.435 [2024-05-15 00:08:24.011701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.435 qpair failed and we were unable to recover it. 00:26:23.435 [2024-05-15 00:08:24.012190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-05-15 00:08:24.012592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-05-15 00:08:24.012608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.700 [2024-05-15 00:08:24.012888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-05-15 00:08:24.013291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.700 [2024-05-15 00:08:24.013307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.700 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.013581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.013984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.014000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.014428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.014778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.014817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.015225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.015675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.015714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 
00:26:23.701 [2024-05-15 00:08:24.016212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.016616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.016654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.017079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.017539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.017579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.017982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.018450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.018489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.018810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.019187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.019207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.019539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.019947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.019986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.020337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.020662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.020678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.021052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.021292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.021331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 
00:26:23.701 [2024-05-15 00:08:24.021658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.021984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.022001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.022418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.022749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.022788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.023231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.023578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.023617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.024107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.024514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.024554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.024941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.025290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.025331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.025763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.026236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.026276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.026676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.027087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.027137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 
00:26:23.701 [2024-05-15 00:08:24.027415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.027871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.027910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.028333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.028747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.028763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.029055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.029352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.029369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.029709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.030133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.030149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.030508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.030938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.030955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.031317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.031689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.031727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.032188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.032554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.032593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 
00:26:23.701 [2024-05-15 00:08:24.033048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.033359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.033399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.033719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.034123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.034162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.034676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.035014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.035054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.035460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.035937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.035954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.036328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.036734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.036774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.701 qpair failed and we were unable to recover it. 00:26:23.701 [2024-05-15 00:08:24.037179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.701 [2024-05-15 00:08:24.037635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.037685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.038027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.038441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.038462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 
00:26:23.702 [2024-05-15 00:08:24.038770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.039185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.039241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.039659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.040132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.040172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.040640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.041028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.041067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.041465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.041818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.041858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.042222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.042617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.042656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.043077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.043420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.043460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.043648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.043960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.043999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 
00:26:23.702 [2024-05-15 00:08:24.044342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.044747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.044786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.045184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.045574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.045590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.046021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.046471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.046512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.046871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.047213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.047253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.047519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.047989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.048027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.048454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.048858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.048897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.049323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.049704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.049743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 
00:26:23.702 [2024-05-15 00:08:24.050211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.050685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.050725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.051023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.051365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.051381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.051802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.052208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.052248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.052645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.053139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.053177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.053683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.054073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.054112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.054509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.054901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.054918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.055279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.055687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.055726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 
00:26:23.702 [2024-05-15 00:08:24.056132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.056541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.056580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.057063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.057537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.057576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.057924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.058326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.058365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.058716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.059109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.059148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.059571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.059953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.059970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.060319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.060654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.060692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 00:26:23.702 [2024-05-15 00:08:24.061127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.061520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.702 [2024-05-15 00:08:24.061573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.702 qpair failed and we were unable to recover it. 
00:26:23.703 [2024-05-15 00:08:24.061940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.062312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.062352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.062853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.063345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.063385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.063862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.064353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.064393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.064641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.065045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.065083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.065504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.065921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.065960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.066440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.066763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.066802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.067252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.067731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.067771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 
00:26:23.703 [2024-05-15 00:08:24.068176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.068543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.068582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.069039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.069451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.069494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.069842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.070123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.070139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.070522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.070875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.070914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.071260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.071657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.071697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.072102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.072508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.072547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.072889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.073310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.073326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 
00:26:23.703 [2024-05-15 00:08:24.073753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.074023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.074039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.074335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.074638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.074677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.074928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.075330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.075370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.075713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.076134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.076174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.076590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.076978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.077016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.077402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.077736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.077775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.078113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.078515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.078532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 
00:26:23.703 [2024-05-15 00:08:24.078899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.079315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.079356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.079783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.080256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.080272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.080634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.081110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.081148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.081650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.082051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.082090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.082518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.082964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.083003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.083356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.083786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.083831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.084277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.084724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.084764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 
00:26:23.703 [2024-05-15 00:08:24.085149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.085562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.085602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.703 qpair failed and we were unable to recover it. 00:26:23.703 [2024-05-15 00:08:24.086017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.703 [2024-05-15 00:08:24.086484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.086524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.086957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.087409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.087448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.087873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.088343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.088383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.088742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.089205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.089246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.089706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.090180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.090233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.090658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.091018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.091034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 
00:26:23.704 [2024-05-15 00:08:24.091441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.091754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.091794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.092214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.092565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.092610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.093022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.093426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.093466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.093838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.094294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.094334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.094792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.095185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.095236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.095627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.096046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.096086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.096565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.097021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.097060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 
00:26:23.704 [2024-05-15 00:08:24.097585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.098036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.098075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.098513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.098865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.098904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.099344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.099820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.099859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.100393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.100791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.100830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.101250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.101650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.101694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.102181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.102590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.102629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.103049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.103508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.103547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 
00:26:23.704 [2024-05-15 00:08:24.103959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.104341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.104381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.104750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.104968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.105007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.105542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.105977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.106016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.106424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.106904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.106920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.107226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.107591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.107608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.107919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.108413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.108453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.108822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.109151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.109200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 
00:26:23.704 [2024-05-15 00:08:24.109603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.109938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.109982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.110411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.110904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.110943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.111387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.111782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.704 [2024-05-15 00:08:24.111821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.704 qpair failed and we were unable to recover it. 00:26:23.704 [2024-05-15 00:08:24.112268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.112685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.112724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.113231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.113684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.113723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.114239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.114734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.114772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.115269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.115688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.115727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 
00:26:23.705 [2024-05-15 00:08:24.116114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.116481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.116498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.116849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.117296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.117312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.117628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.118067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.118083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.118501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.118923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.118939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.119349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.119653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.119669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.120024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.120449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.120466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.120818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.121264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.121281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 
00:26:23.705 [2024-05-15 00:08:24.121706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.122165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.122181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.122545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.122872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.122888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.123294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.123711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.123727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.124036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.124403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.124420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.124806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.125178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.125199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.125633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.125939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.125956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.126363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.126735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.126752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 
00:26:23.705 [2024-05-15 00:08:24.127161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.127611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.127628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.128067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.128409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.128425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.128832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.129233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.129249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.129605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.129899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.129915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.130319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.130723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.130739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.131109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.131513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.131529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 00:26:23.705 [2024-05-15 00:08:24.131953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.132299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.705 [2024-05-15 00:08:24.132315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.705 qpair failed and we were unable to recover it. 
00:26:23.706 [2024-05-15 00:08:24.132720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.133169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.133185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.133672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.134064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.134080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.134518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.134890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.134907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.135256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.135548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.135564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.136003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.136357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.136373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.136744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.137096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.137112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.137517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.137825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.137842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 
00:26:23.706 [2024-05-15 00:08:24.138216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.138651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.138668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.139019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.139368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.139385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.139786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.140132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.140148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.140557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.140874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.140890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.141251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.141637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.141654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.142075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.142486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.142502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.142839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.143239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.143256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 
00:26:23.706 [2024-05-15 00:08:24.143563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.143904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.143920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.144344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.144722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.144739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.145099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.145503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.145519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.145874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.146205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.146221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.146653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.147095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.147112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.147495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.147896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.147912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.148276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.148656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.148672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 
00:26:23.706 [2024-05-15 00:08:24.149032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.149490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.149507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.149811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.150161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.150177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.150589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.151003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.151019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.151428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.151784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.151800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.152270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.152648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.152664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.152949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.153376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.153393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.153788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.154201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.154217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 
00:26:23.706 [2024-05-15 00:08:24.154529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.154887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.154903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.706 qpair failed and we were unable to recover it. 00:26:23.706 [2024-05-15 00:08:24.155256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.706 [2024-05-15 00:08:24.155609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.155625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.155982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.156416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.156433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.156786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.157201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.157217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.157654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.158022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.158038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.158403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.158735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.158751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.159105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.159532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.159548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 
00:26:23.707 [2024-05-15 00:08:24.159926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.160279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.160296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.160694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.161049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.161066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.161503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.161873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.161889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.162314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.162741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.162757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.163203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.163567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.163583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.163892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.164248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.164265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.164693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.165135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.165152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 
00:26:23.707 [2024-05-15 00:08:24.165562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.165965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.165981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.166400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.166824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.166840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.167265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.167600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.167617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.167979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.168400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.168417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.168777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.169174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.169195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.169553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.169909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.169925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.170262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.170696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.170735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 
00:26:23.707 [2024-05-15 00:08:24.171237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.171710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.171748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.172266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.172697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.172736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.173225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.173667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.173706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.174222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.174646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.174685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.175223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.175620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.175659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.176150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.176560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.176600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.177099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.177498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.177538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 
00:26:23.707 [2024-05-15 00:08:24.177958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.178445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.178485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.178966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.179418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.179458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.179923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.180373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.180413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.707 qpair failed and we were unable to recover it. 00:26:23.707 [2024-05-15 00:08:24.180867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.707 [2024-05-15 00:08:24.181301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.181341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.181838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.182312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.182352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.182701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.183182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.183231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.183669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.184138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.184177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 
00:26:23.708 [2024-05-15 00:08:24.184636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.185154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.185204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.185644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.186077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.186116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.186613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.187174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.187227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.187588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.188064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.188080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.188479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.188900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.188939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.189418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.189820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.189836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.190210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.190621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.190660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 
00:26:23.708 [2024-05-15 00:08:24.191159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.191565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.191604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.192010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.192433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.192473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.192958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.193426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.193466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.193824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.194294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.194335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.194724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.195131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.195170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.195591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.196009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.196048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.196469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.196808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.196847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 
00:26:23.708 [2024-05-15 00:08:24.197312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.197652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.197691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.198188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.198497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.198514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.198926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.199380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.199436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.199875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.200269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.200286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.200654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.201080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.201119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.201574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.201980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.202020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.202541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.202900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.202940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 
00:26:23.708 [2024-05-15 00:08:24.203342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.203769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.203809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.204288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.204692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.204732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.205213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.205614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.205654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.206170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.206526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.206566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.708 [2024-05-15 00:08:24.207004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.207487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.708 [2024-05-15 00:08:24.207528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.708 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.207917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.208365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.208405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.208907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.209385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.209425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 
00:26:23.709 [2024-05-15 00:08:24.209921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.210411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.210428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.210719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.211176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.211226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.211697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.212213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.212233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.212621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.213071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.213110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.213640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.214073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.214112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.214541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.214992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.215032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.215485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.215847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.215887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 
00:26:23.709 [2024-05-15 00:08:24.216412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.216868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.216907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.217287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.217653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.217692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.218127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.218574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.218613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.219087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.219499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.219538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.220029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.220530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.220570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.221088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.221499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.221545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.221960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.222431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.222448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 
00:26:23.709 [2024-05-15 00:08:24.222822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.223174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.223233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.223648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.223991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.224030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.224443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.224874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.224913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.225400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.225875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.225913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.226395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.226852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.226892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.227250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.227711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.227751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.228254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.228710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.228749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 
00:26:23.709 [2024-05-15 00:08:24.229158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.229634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.229673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.230170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.230579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.230624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.709 qpair failed and we were unable to recover it. 00:26:23.709 [2024-05-15 00:08:24.231153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.709 [2024-05-15 00:08:24.231583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.231624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.232033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.232479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.232496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.232929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.233277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.233293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.233651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.234105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.234144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.234532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.234940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.234978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 
00:26:23.710 [2024-05-15 00:08:24.235412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.235809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.235848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.236257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.236752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.236791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.237271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.237685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.237724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.238223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.238649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.238687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.239097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.239511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.239558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.239922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.240384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.240424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.240887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.241359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.241399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 
00:26:23.710 [2024-05-15 00:08:24.241905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.242384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.242423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.242841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.243243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.243283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.243754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.244182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.244207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.244630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.245104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.245143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.245583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.246046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.246086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.246585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.247099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.247138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.247564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.248051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.248094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 
00:26:23.710 [2024-05-15 00:08:24.248522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.248934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.248951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.249301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.249663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.249679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.250039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.250484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.250524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.251032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.251512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.251553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.251973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.252382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.252398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.252704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.253120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.253137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.253505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.253816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.253833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 
00:26:23.710 [2024-05-15 00:08:24.254253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.254732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.254771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.255309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.255764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.255803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.256282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.256645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.256684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.257188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.257652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.710 [2024-05-15 00:08:24.257692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.710 qpair failed and we were unable to recover it. 00:26:23.710 [2024-05-15 00:08:24.258220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.258576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.258615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.259032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.259509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.259549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.259984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.260458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.260499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 
00:26:23.711 [2024-05-15 00:08:24.260924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.261330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.261385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.261830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.262335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.262375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.262792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.263312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.263366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.263798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.264276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.264314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.264701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.265090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.265130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.265637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.266168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.266221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.266695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.267229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.267246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 
00:26:23.711 [2024-05-15 00:08:24.267632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.268151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.268189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.268600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.268981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.269019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.269495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.269976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.270014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.270509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.270945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.270983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.271419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.271802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.271842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.272257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.272622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.272661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.273211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.273743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.273783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 
00:26:23.711 [2024-05-15 00:08:24.274263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.274704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.274743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.275267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.275621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.275661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.276092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.276551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.276592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.277097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.277491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.277532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.278066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.278610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.278649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.279083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.279587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.279628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.280009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.280469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.280509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 
00:26:23.711 [2024-05-15 00:08:24.280875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.281346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.281363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.281828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.282323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.282340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.282705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.283082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.283099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.283530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.283901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.283918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.284362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.284673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.284690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.711 qpair failed and we were unable to recover it. 00:26:23.711 [2024-05-15 00:08:24.285140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.285627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.711 [2024-05-15 00:08:24.285667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.712 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.286125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.286619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.286635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 
00:26:23.976 [2024-05-15 00:08:24.287086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.287509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.287527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.287934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.288376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.288393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.288837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.289282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.289321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.289743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.290088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.290126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.290577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.291040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.291078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.291423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.291819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.291859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.292341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.292713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.292752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 
00:26:23.976 [2024-05-15 00:08:24.293259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.293717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.293757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.294233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.294681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.294721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.295178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.295585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.295624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.296045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.296468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.296485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.296915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.297337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.297378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.297873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.298377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.298394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.298859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.299293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.299333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 
00:26:23.976 [2024-05-15 00:08:24.299759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.300167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.300220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.976 [2024-05-15 00:08:24.300704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.301222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.976 [2024-05-15 00:08:24.301262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.976 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.301776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.302289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.302329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.302778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.303263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.303303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.303679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.304161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.304209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.304736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.305187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.305209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.305630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.306073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.306090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 
00:26:23.977 [2024-05-15 00:08:24.306538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.306904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.306944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.307362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.307759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.307798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.308301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.308805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.308845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.309377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.309780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.309819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.310311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.310822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.310862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.311380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.311773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.311812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.312250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.312737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.312777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 
00:26:23.977 [2024-05-15 00:08:24.313205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.313691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.313731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.314256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.314667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.314706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.315208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.315660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.315700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.316220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.316634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.316674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.317167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.317687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.317727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.318238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.318747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.318787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.319299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.319705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.319745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 
00:26:23.977 [2024-05-15 00:08:24.320237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.320709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.320747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.321221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.321684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.321724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.322225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.322657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.322697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.323202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.323694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.323734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.324262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.324761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.324801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.325321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.325830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.325869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.977 [2024-05-15 00:08:24.326369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.326855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.326894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 
00:26:23.977 [2024-05-15 00:08:24.327427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.327907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.977 [2024-05-15 00:08:24.327924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.977 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.328364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.328805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.328845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.329349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.329756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.329795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.330294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.330777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.330816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.331343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.331774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.331814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.332279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.332722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.332762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.333287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.333751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.333790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 
00:26:23.978 [2024-05-15 00:08:24.334258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.334750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.334790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.335309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.335716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.335756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.336251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.336720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.336759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.337285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.337795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.337835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.338276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.338741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.338780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.339219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.339703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.339743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.340275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.340709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.340748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 
00:26:23.978 [2024-05-15 00:08:24.341186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.341605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.341644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.342090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.342599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.342640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.343052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.343525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.343565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.344055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.344446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.344486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.344962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.345446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.345486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.345967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.346393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.346434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.346876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.347356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.347396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 
00:26:23.978 [2024-05-15 00:08:24.347834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.348342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.348382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.348886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.349289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.349306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.349753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.350234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.350276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.350716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.351224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.351264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.351780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.352288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.352328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.352741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.353205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.353246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.978 [2024-05-15 00:08:24.353700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.354111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.354150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 
00:26:23.978 [2024-05-15 00:08:24.354645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.355053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.978 [2024-05-15 00:08:24.355093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.978 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.355501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.355915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.355932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.356350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.356794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.356833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.357358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.357789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.357828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.358292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.358775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.358815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.359247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.359734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.359774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.360305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.360813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.360852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 
00:26:23.979 [2024-05-15 00:08:24.361372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.361836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.361875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.362404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.362759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.362798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.363232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.363723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.363770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.364246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.364692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.364733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.365260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.365759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.365799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.366247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.366638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.366655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.367080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.367474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.367491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 
00:26:23.979 [2024-05-15 00:08:24.367856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.368214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.368231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.368677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.369094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.369111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.369551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.369993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.370010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.370433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.370870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.370887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.371268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.371684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.371701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.372145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.372504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.372525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.372964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.373398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.373416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 
00:26:23.979 [2024-05-15 00:08:24.373798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.374181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.374204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.374648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.375087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.375103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.375551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.375913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.375930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.376371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.376811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.376827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.377206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.377567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.377584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.377943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.378382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.378399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.378816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.379181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.379209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 
00:26:23.979 [2024-05-15 00:08:24.379647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.379986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.979 [2024-05-15 00:08:24.380003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.979 qpair failed and we were unable to recover it. 00:26:23.979 [2024-05-15 00:08:24.380438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.380855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.380874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.381292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.381651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.381668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.382091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.382531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.382548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.382939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.383363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.383380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.383817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.384202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.384219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.384651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.385090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.385107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 
00:26:23.980 [2024-05-15 00:08:24.385466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.385907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.385923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.386368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.386780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.386796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.387182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.387615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.387632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.388075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.388515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.388532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.388971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.389414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.389434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.389866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.390346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.390363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.390808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.391248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.391264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 
00:26:23.980 [2024-05-15 00:08:24.391708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.392147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.392164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.392529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.392969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.392986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.393360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.393802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.393819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.394261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.394699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.394716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.395135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.395510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.395527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.395945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.396381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.396398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.396838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.397278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.397296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 
00:26:23.980 [2024-05-15 00:08:24.397672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.398111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.398128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.398582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.398995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.399012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.399372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.399704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.399720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.400114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.400418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.400435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.400777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.401243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.401260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.401734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.402214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.402231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 00:26:23.980 [2024-05-15 00:08:24.402607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.403049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.403065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.980 qpair failed and we were unable to recover it. 
00:26:23.980 [2024-05-15 00:08:24.403509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.980 [2024-05-15 00:08:24.403865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.403882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.404329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.404766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.404783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.405223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.405661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.405678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.405988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.406428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.406445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.406824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.407256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.407273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.407636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.408071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.408088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.408529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.408899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.408916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 
00:26:23.981 [2024-05-15 00:08:24.409334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.409773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.409790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.410231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.410672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.410689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.411065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.411498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.411515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.411954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.412343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.412360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.412786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.413204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.413222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.413663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.414083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.414100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.414540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.414916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.414933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 
00:26:23.981 [2024-05-15 00:08:24.415373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.415766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.415783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.416207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.416587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.416626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.417047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.417531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.417571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.418021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.418455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.418495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.418994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.419414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.419454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.419927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.420416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.420456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.420983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.421489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.421529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 
00:26:23.981 [2024-05-15 00:08:24.422039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.422547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.422587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.423101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.423509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.423549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.424047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.424450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.424501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.981 qpair failed and we were unable to recover it. 00:26:23.981 [2024-05-15 00:08:24.424944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.981 [2024-05-15 00:08:24.425392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.425432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.425951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.426460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.426508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.426990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.427433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.427475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.428016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.428503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.428543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 
00:26:23.982 [2024-05-15 00:08:24.429046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.429509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.429549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.430009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.430492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.430532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.431052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.431416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.431433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.431858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.432386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.432435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.432908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.433346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.433363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.433812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.434323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.434363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.434878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.435338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.435378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 
00:26:23.982 [2024-05-15 00:08:24.435865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.436352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.436392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.436892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.437277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.437317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.437811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.438321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.438360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.438860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.439349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.439389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.439912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.440375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.440415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.440886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.441293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.441334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.441826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.442337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.442385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 
00:26:23.982 [2024-05-15 00:08:24.442891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.443351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.443391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.443907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.444312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.444358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.444745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.445230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.445270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.445793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.446305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.446346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.446862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.447377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.447417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.447855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.448321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.448361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.448889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.449318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.449358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 
00:26:23.982 [2024-05-15 00:08:24.449854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.450346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.450387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.450814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.451299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.451339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.451863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.452371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.452410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.982 qpair failed and we were unable to recover it. 00:26:23.982 [2024-05-15 00:08:24.452925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.982 [2024-05-15 00:08:24.453426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.453467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.453855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.454338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.454377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.454879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.455293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.455333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.455825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.456332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.456372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 
00:26:23.983 [2024-05-15 00:08:24.456892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.457399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.457439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.457959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.458463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.458503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.459020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.459529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.459569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.460079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.460585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.460625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.461142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.461672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.461713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.462215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.462554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.462571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.463026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.463439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.463479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 
00:26:23.983 [2024-05-15 00:08:24.463953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.464439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.464477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.464862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.465344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.465391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.465860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.466348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.466388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.466918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.467425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.467465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.467903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.468306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.468346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.468820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.469243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.469283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.469820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.470284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.470324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 
00:26:23.983 [2024-05-15 00:08:24.470820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.471302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.471342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.471847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.472329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.472369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.472841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.473348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.473388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.473896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.474381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.474421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.474948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.475363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.475402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.475811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.476283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.476300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.476750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.477212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.477252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 
00:26:23.983 [2024-05-15 00:08:24.477757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.478164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.478216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.478635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.479107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.479147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.479686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.480148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.480188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.480713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.481204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.983 [2024-05-15 00:08:24.481244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.983 qpair failed and we were unable to recover it. 00:26:23.983 [2024-05-15 00:08:24.481661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.482144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.482183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.482708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.483117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.483156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.483657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.484167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.484220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 
00:26:23.984 [2024-05-15 00:08:24.484583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.485031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.485071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.485590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.486021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.486060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.486537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.487023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.487063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.487586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.487971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.488010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.488524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.488933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.488973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.489376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.489864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.489903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.490435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.490929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.490968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 
00:26:23.984 [2024-05-15 00:08:24.491461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.491947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.491986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.492488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.492897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.492936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.493435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.493885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.493923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.494421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.494828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.494868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.495360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.495868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.495907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.496386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.496868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.496907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.497439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.497866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.497905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 
00:26:23.984 [2024-05-15 00:08:24.498385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.498865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.498903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.499440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.499889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.499928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.500403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.500926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.500965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.501477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.501942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.501959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.502269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.502574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.502616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.503067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.503555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.503595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.504118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.504624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.504664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 
00:26:23.984 [2024-05-15 00:08:24.505059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.505520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.505561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.505919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.506351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.506390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.506891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.507397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.507436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.507952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.508365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.508405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.984 qpair failed and we were unable to recover it. 00:26:23.984 [2024-05-15 00:08:24.508893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.509402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.984 [2024-05-15 00:08:24.509441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.509862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.510275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.510315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.510723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.511213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.511253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 
00:26:23.985 [2024-05-15 00:08:24.511729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.512207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.512247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.512765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.513230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.513270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.513767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.514226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.514272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.514692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.515122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.515162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.515591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.516017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.516057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.516539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.516988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.517027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.517450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.517858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.517897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 
00:26:23.985 [2024-05-15 00:08:24.518391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.518855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.518894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.519392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.519897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.519936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.520478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.520940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.520979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.521479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.521941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.521980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.522403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.522886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.522925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.523449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.523897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.523942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.524461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.524930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.524969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 
00:26:23.985 [2024-05-15 00:08:24.525498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.525906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.525945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.526368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.526853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.526892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.527416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.527913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.527952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.528480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.528989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.529029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.529545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.530055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.530095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.530608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.531117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.531156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.531693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.532115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.532154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 
00:26:23.985 [2024-05-15 00:08:24.532615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.533055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.533094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.533565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.534053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.985 [2024-05-15 00:08:24.534099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.985 qpair failed and we were unable to recover it. 00:26:23.985 [2024-05-15 00:08:24.534620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.535081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.535121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.535622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.536084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.536124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.536627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.537045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.537084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.537567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.538055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.538095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.538624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.539133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.539171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 
00:26:23.986 [2024-05-15 00:08:24.539690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.540152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.540212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.540720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.541215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.541255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.541728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.542216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.542256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.542683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.543162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.543210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.543735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.544141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.544186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.544616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.545099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.545139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.545669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.546152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.546202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 
00:26:23.986 [2024-05-15 00:08:24.546728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.547233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.547274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.547810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.548303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.548343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.548836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.549249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.549290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.549785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.550293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.550333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.550849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.551347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.551387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.551889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.552320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.552360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.552855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.553284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.553324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 
00:26:23.986 [2024-05-15 00:08:24.553800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.554210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.554250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.554726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.555215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.555255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.555806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.556272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.556313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.556813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.557326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.557366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.557874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.558228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.558244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.558714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.559167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.559184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.559632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.560070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.560087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 
00:26:23.986 [2024-05-15 00:08:24.560463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.560809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.560848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:23.986 [2024-05-15 00:08:24.561257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.561740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.986 [2024-05-15 00:08:24.561780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:23.986 qpair failed and we were unable to recover it. 00:26:24.251 [2024-05-15 00:08:24.562299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.562742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.562759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.251 qpair failed and we were unable to recover it. 00:26:24.251 [2024-05-15 00:08:24.563247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.563610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.563626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.251 qpair failed and we were unable to recover it. 00:26:24.251 [2024-05-15 00:08:24.564076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.564540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.564581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.251 qpair failed and we were unable to recover it. 00:26:24.251 [2024-05-15 00:08:24.565002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.565478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.565495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.251 qpair failed and we were unable to recover it. 00:26:24.251 [2024-05-15 00:08:24.565940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.566382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.566422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.251 qpair failed and we were unable to recover it. 
00:26:24.251 [2024-05-15 00:08:24.566923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.567406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.567445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.251 qpair failed and we were unable to recover it. 00:26:24.251 [2024-05-15 00:08:24.567971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.568407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.568447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.251 qpair failed and we were unable to recover it. 00:26:24.251 [2024-05-15 00:08:24.568932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.569394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.251 [2024-05-15 00:08:24.569434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.251 qpair failed and we were unable to recover it. 00:26:24.251 [2024-05-15 00:08:24.569848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.570311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.570328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.570754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.571263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.571302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.571741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.572233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.572274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.572770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.573251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.573291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 
00:26:24.252 [2024-05-15 00:08:24.573831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.574295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.574335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.574832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.575318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.575358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.575876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.576380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.576420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.576935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.577398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.577438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.577879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.578313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.578353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.578769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.579259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.579299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.579798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.580254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.580271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 
00:26:24.252 [2024-05-15 00:08:24.580714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.581141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.581157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.581643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.582093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.582133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.582614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.583101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.583139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.583680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.584111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.584151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.584688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.585177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.585229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.585671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.586187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.586238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.586773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.587255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.587295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 
00:26:24.252 [2024-05-15 00:08:24.587794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.588276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.588317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.588869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.589283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.589324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.589794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.590279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.590318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.590838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.591343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.591383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.591823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.592262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.592302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.592713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.593205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.593244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.593799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.594262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.594302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 
00:26:24.252 [2024-05-15 00:08:24.594800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.595310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.595350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.595848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.596322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.596363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.596913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.597400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.597440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.597938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.598409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.598426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.598880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.599315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.599355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.599772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.600214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.600254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 00:26:24.252 [2024-05-15 00:08:24.600732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.601202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.252 [2024-05-15 00:08:24.601242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.252 qpair failed and we were unable to recover it. 
00:26:24.255 [2024-05-15 00:08:24.736734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.737226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.737266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.737725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.738219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.738259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.738683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.739145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.739184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.739618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.740046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.740085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.740570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.741021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.741061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.741503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.741987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.742026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.742562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.742928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.742968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 
00:26:24.255 [2024-05-15 00:08:24.743460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.743897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.743937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.744354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.744843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.744882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.745404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.745913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.745953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.746463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.746924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.746964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.747463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.747980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.748020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.748542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.748976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.749015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.749466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.749880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.749920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 
00:26:24.255 [2024-05-15 00:08:24.750403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.750817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.750856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.751356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.751856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.751873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.752335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.752781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.752821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.753323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.753810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.753850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.754375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.754841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.754880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.755381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.755879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.755918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.756444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.756950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.756990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 
00:26:24.255 [2024-05-15 00:08:24.757524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.758012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.758051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.758474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.758906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.758945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.759361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.759844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.759883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.760405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.760913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.760952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.761401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.761857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.761897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.762410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.762777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.762816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.763313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.763775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.763815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 
00:26:24.255 [2024-05-15 00:08:24.764311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.764721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.764760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.765232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.765700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.765717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.766181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.766681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.766720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.767222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.767707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.767747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.768275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.768687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.768727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.769210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.769692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.769731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.770239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.770722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.770761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 
00:26:24.255 [2024-05-15 00:08:24.771217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.771676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.771716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.772086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.772576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.772617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.255 [2024-05-15 00:08:24.773053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.773435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.255 [2024-05-15 00:08:24.773475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.255 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.773934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.774417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.774457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.774959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.775444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.775484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.776015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.776520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.776560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.777076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.777546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.777586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 
00:26:24.256 [2024-05-15 00:08:24.778092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.778577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.778617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.779142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.779637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.779676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.780230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.780693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.780733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.781240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.781745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.781784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.782305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.782673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.782712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.783210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.783718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.783756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.784271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.784703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.784742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 
00:26:24.256 [2024-05-15 00:08:24.785175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.785686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.785726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.786259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.786744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.786784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.787308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.787766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.787805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.788299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.788704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.788743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.789254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.789762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.789801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.790319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.790726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.790765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.791173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.791683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.791722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 
00:26:24.256 [2024-05-15 00:08:24.792146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.792570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.792611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.793103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.793504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.793544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.794048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.794559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.794600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.795113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.795599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.795639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.796168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.796733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.796773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.797303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.797709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.797748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.798241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.798749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.798789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 
00:26:24.256 [2024-05-15 00:08:24.799305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.799768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.799808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.800281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.800715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.800755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.801254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.801717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.801756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.802264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.802634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.802674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.803171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.803692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.803732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.804169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.804618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.804658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.805176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.805637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.805676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 
00:26:24.256 [2024-05-15 00:08:24.806098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.806580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.806620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.807146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.807562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.807602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.808090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.808497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.808537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.809032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.809539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.809579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.810100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.810564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.810604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.811078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.811563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.811603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.811986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.812407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.812454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 
00:26:24.256 [2024-05-15 00:08:24.812964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.813447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.813488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.813957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.814343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.814383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.814829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.815267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.815307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.815836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.816343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.816383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.816899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.817311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.817328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.817782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.818213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.818254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.818723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.819216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.819257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 
00:26:24.256 [2024-05-15 00:08:24.819783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.820218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.820263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.820785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.821270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.821310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.821737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.822150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.822216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.822624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.823104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.823143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.823595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.824055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.824095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.824600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.825086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.825103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.256 qpair failed and we were unable to recover it. 00:26:24.256 [2024-05-15 00:08:24.825463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.256 [2024-05-15 00:08:24.825914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.825954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 
00:26:24.257 [2024-05-15 00:08:24.826472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.826981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.827020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 00:26:24.257 [2024-05-15 00:08:24.827520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.828004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.828044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 00:26:24.257 [2024-05-15 00:08:24.828571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.829079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.829118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 00:26:24.257 [2024-05-15 00:08:24.829624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.830129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.830168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 00:26:24.257 [2024-05-15 00:08:24.830733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.831243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.831283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 00:26:24.257 [2024-05-15 00:08:24.831792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.832264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.832310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 00:26:24.257 [2024-05-15 00:08:24.832766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.833210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.833227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 
00:26:24.257 [2024-05-15 00:08:24.833674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.834186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.834242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 00:26:24.257 [2024-05-15 00:08:24.834764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.835283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.835300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 00:26:24.257 [2024-05-15 00:08:24.835768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.836258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.257 [2024-05-15 00:08:24.836299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.257 qpair failed and we were unable to recover it. 00:26:24.257 [2024-05-15 00:08:24.836834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.837253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.837294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 00:26:24.522 [2024-05-15 00:08:24.837728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.838106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.838123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 00:26:24.522 [2024-05-15 00:08:24.838571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.838918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.838957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 00:26:24.522 [2024-05-15 00:08:24.839452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.839958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.839998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 
00:26:24.522 [2024-05-15 00:08:24.840518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.840942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.840981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 00:26:24.522 [2024-05-15 00:08:24.841390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.841848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.841892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 00:26:24.522 [2024-05-15 00:08:24.842389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.842811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.842850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 00:26:24.522 [2024-05-15 00:08:24.843275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.843762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.843801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 00:26:24.522 [2024-05-15 00:08:24.844327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.844815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.844855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 00:26:24.522 [2024-05-15 00:08:24.845354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.845814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.845854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 00:26:24.522 [2024-05-15 00:08:24.846350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.846801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.846840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 
00:26:24.522 [2024-05-15 00:08:24.847282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.847749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.847788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.522 qpair failed and we were unable to recover it. 00:26:24.522 [2024-05-15 00:08:24.848260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.522 [2024-05-15 00:08:24.848719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.848758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.849180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.849675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.849715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.850242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.850646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.850680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.851119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.851578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.851595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.852030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.852422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.852463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.852983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.853445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.853485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 
00:26:24.523 [2024-05-15 00:08:24.853984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.854494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.854534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.855072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.855561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.855602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.856095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.856583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.856624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.857142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.857637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.857685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.858219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.858709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.858749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3731039 Killed "${NVMF_APP[@]}" "$@" 00:26:24.523 [2024-05-15 00:08:24.859281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.859715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.859733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 
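The "line 44: 3731039 Killed "${NVMF_APP[@]}" "$@"" entry above is target_disconnect.sh deliberately killing the running NVMe-oF target while the host initiator keeps retrying; errno 111 is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 until a new target comes up. A minimal sketch, not part of the SPDK test scripts, of how one could poll for the listener to return (uses bash's /dev/tcp and coreutils timeout; names are illustrative):

```bash
#!/usr/bin/env bash
# Illustrative helper only: wait until a TCP listener answers again.
# connect() failing with errno 111 (ECONNREFUSED) just means no process
# is bound to the port yet, so retrying is the expected behavior.
wait_for_listener() {
    local ip=$1 port=$2 retries=${3:-30}
    for ((i = 0; i < retries; i++)); do
        if timeout 1 bash -c "exec 3<>/dev/tcp/$ip/$port" 2>/dev/null; then
            return 0
        fi
        sleep 1
    done
    return 1
}

wait_for_listener 10.0.0.2 4420 || echo "listener did not come back"
```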
00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:26:24.523 [2024-05-15 00:08:24.860178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:24.523 [2024-05-15 00:08:24.860610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.860628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:24.523 [2024-05-15 00:08:24.861012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.523 [2024-05-15 00:08:24.861441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.861459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.861898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.862256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.862274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.862695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.863131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.863148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.863569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.864005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.864021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.864373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.864822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.864839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 
00:26:24.523 [2024-05-15 00:08:24.865283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.865722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.865738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.866133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.866490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.866509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.866965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.867406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.867423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.867867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.868308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.868326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.868677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.869140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.869157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.869547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3731865 00:26:24.523 [2024-05-15 00:08:24.869973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.869991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 
00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3731865 00:26:24.523 [2024-05-15 00:08:24.870436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3731865 ']' 00:26:24.523 [2024-05-15 00:08:24.870854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 [2024-05-15 00:08:24.870872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.871253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.523 [2024-05-15 00:08:24.871706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.523 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:24.523 [2024-05-15 00:08:24.871725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.523 qpair failed and we were unable to recover it. 00:26:24.523 [2024-05-15 00:08:24.872171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.524 [2024-05-15 00:08:24.872600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.872619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:24.524 00:08:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.524 [2024-05-15 00:08:24.873044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.873461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.873478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 
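The xtrace lines interleaved above show nvmfappstart relaunching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then waiting for the new process (waitforlisten 3731865) to expose its JSON-RPC socket at /var/tmp/spdk.sock. A rough sketch of that sequence, with paths and option values copied from the log; the polling loop is illustrative, the real logic lives in nvmf/common.sh and autotest_common.sh and may differ:

```bash
#!/usr/bin/env bash
# Sketch of the restart sequence suggested by the xtrace above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# Wait for the app to create its RPC socket before issuing any rpc.py calls.
for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.5
done
[[ -S /var/tmp/spdk.sock ]] || { echo "nvmf_tgt (pid $nvmfpid) never listened"; exit 1; }
```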
00:26:24.524 [2024-05-15 00:08:24.873869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.874294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.874317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.874703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.875140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.875157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.875630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.876084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.876102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.876478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.876914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.876931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.877375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.877763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.877786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.878238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.878603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.878620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.879007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.879432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.879449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 
00:26:24.524 [2024-05-15 00:08:24.879893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.880280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.880297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.880664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.881103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.881120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.881562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.882024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.882041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.882506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.882872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.882890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.883339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.883655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.883672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.884040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.884441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.884458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.884823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.885265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.885282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 
00:26:24.524 [2024-05-15 00:08:24.885732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.886168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.886186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.886526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.886987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.887004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.887470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.887746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.887764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.888217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.888657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.888675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.889115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.889553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.889570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.889780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.890219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.890237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.890625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.891069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.891086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 
00:26:24.524 [2024-05-15 00:08:24.891530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.891932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.891948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.892380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.892769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.892786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.893045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.893426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.893444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.893812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.894222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.894240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.894632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.894998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.895015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.524 qpair failed and we were unable to recover it. 00:26:24.524 [2024-05-15 00:08:24.895458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.524 [2024-05-15 00:08:24.895875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.895892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.896332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.896709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.896726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 
00:26:24.525 [2024-05-15 00:08:24.897161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.897605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.897622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.898064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.898525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.898542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.899006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.899465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.899482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.899976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.900345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.900363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.900717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.901175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.901198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.901641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.902008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.902025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.902397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.902838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.902855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 
00:26:24.525 [2024-05-15 00:08:24.903298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.903735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.903752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.904172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.904623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.904641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.905084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.905520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.905538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.905908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.906351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.906369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.906811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.907178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.907203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.907644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.908069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.908086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.908451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.908824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.908841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 
00:26:24.525 [2024-05-15 00:08:24.909239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.909651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.909667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.909875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.910324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.910341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.910655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.911132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.911149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.911492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.911902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.911919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.912338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.912791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.912808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.913168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.913595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.913612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.914056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.914392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.914409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 
00:26:24.525 [2024-05-15 00:08:24.914848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.915263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.915280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.915695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.916109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.916126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.916568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.916982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.916999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.917171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.917546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.917564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.917983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.918435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.918452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.918889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.919266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.919283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 00:26:24.525 [2024-05-15 00:08:24.919631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.920064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.525 [2024-05-15 00:08:24.920081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.525 qpair failed and we were unable to recover it. 
00:26:24.525 [2024-05-15 00:08:24.920539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.920878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.920894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.921272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.921712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.921729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.922144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.922220] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:26:24.526 [2024-05-15 00:08:24.922284] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.526 [2024-05-15 00:08:24.922503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.922523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.922962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.923296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.923313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.923697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.924133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.924149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.924597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.924936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.924953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 
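The "Starting SPDK v24.05-pre ... DPDK 23.11.0 initialization" entry buried above carries the EAL parameters of the restarted target; the -c 0xF0 core mask selects CPU cores 4 through 7, matching the -m 0xF0 passed to nvmf_tgt. A quick, purely illustrative way to decode such a mask in shell:

```bash
# Decode a DPDK/SPDK core mask: each set bit selects one CPU core.
mask=0xF0
for ((cpu = 0; cpu < 64; cpu++)); do
    (( (mask >> cpu) & 1 )) && echo "core $cpu enabled"
done
# For 0xF0 this prints cores 4, 5, 6, and 7.
```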
00:26:24.526 [2024-05-15 00:08:24.925392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.925824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.925842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.926231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.926668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.926686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.927124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.927521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.927539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.927921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.928356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.928374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.928786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.929142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.929158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.929602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.929759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.929776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.930151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.930460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.930477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 
00:26:24.526 [2024-05-15 00:08:24.930915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.931274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.931291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.931717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.932076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.932093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.932459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.932869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.932886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.933322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.933779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.933796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.934231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.934589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.934607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.935019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.935371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.935388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.935802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.936240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.936257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 
00:26:24.526 [2024-05-15 00:08:24.936406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.936835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.936852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.937216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.937654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.937671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.938104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.938542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.938560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.938922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.939350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.939367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.939538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.939955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.939972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.526 qpair failed and we were unable to recover it. 00:26:24.526 [2024-05-15 00:08:24.940352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.940796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.526 [2024-05-15 00:08:24.940813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.941278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.941620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.941638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 
00:26:24.527 [2024-05-15 00:08:24.942019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.942386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.942403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.942838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.943245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.943262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.943721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.944026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.944043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.944455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.944891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.944907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.945349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.945569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.945586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.946035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.946461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.946478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.946915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.947275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.947292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 
00:26:24.527 [2024-05-15 00:08:24.947675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.948112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.948129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.948544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.948900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.948917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.949253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.949571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.949589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.950048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.950418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.950435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.950849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.951232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.951249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.951664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.952090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.952107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.952564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.952912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.952929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 
00:26:24.527 [2024-05-15 00:08:24.953340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.953686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.953703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.954006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.954280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.954297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.954674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.955104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.955121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.955541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.955970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.955990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.956342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.956749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.956766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.957149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.957509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.957526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.957959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.958249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.958266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 
00:26:24.527 [2024-05-15 00:08:24.958699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.959050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.959067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.959515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.959736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.959752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.960113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.960304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.960322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.960698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.961117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.961133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.527 [2024-05-15 00:08:24.961495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.961925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.961942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.527 qpair failed and we were unable to recover it. 00:26:24.527 [2024-05-15 00:08:24.962374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.527 [2024-05-15 00:08:24.962831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.962848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.963210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.963563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.963580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 
00:26:24.528 [2024-05-15 00:08:24.964010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.964362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.964379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.964788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.965198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.965215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.965650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.966063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.966080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.966516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.966946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.966963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.967402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.967852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.967869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.968135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.968563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.968580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.969011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.969362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.969379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 
00:26:24.528 [2024-05-15 00:08:24.969839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.970200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.970217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.970655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.971018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.971035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.971393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.971814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.971833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.972164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.972531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.972548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.972980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.973411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.973428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.973845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.974235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.974252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.974674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.974977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.974993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 
00:26:24.528 [2024-05-15 00:08:24.975423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.975828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.975845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.976272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.976702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.976719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.977148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.977574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.977591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.978001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.978426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.978442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.978874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.979300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.979317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.979747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.980154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.980173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.980544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.980979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.980995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 
00:26:24.528 [2024-05-15 00:08:24.981366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.981729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.981746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.982189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.982578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.982594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.982949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.983397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.983414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.983697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.984121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.984138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.984574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.985051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.985067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.985503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.985853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.985870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 00:26:24.528 [2024-05-15 00:08:24.986277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.986705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.986721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.528 qpair failed and we were unable to recover it. 
00:26:24.528 [2024-05-15 00:08:24.987128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.528 [2024-05-15 00:08:24.987555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.987572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.988000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.988365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.988384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.988667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.989105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.989122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.989478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.989882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.989899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.990328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.990752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.990769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.991123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.991572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.991589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.991949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.992394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.992411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 
00:26:24.529 [2024-05-15 00:08:24.992767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.993121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.993137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.993574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.993954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.993971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.994399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.994820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.994837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.995263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.995668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.995684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.995974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.996417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.996439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.996868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.997269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.997286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.997725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.998126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.998142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 
00:26:24.529 [2024-05-15 00:08:24.998549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.998830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.998846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:24.999272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.999678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:24.999695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.000051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.000503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.000520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.000951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.001282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.001300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.001679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.002051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.002067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.002500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.002872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.002889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.003316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.003737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.003753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 
00:26:24.529 [2024-05-15 00:08:25.004180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.004560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.004576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.004913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.005314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.005331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.005761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.006201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.006218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.006647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.007071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.007087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.007514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.007868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.007884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.008340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.008691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.008708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.009075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.009499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.009515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 
00:26:24.529 [2024-05-15 00:08:25.009889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.010218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.010234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.529 [2024-05-15 00:08:25.010662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.011084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.529 [2024-05-15 00:08:25.011101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.529 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.011526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.011862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.011878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.012307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.012733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.012749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.013156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.013505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.013522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.013857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.014273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.014289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.014662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.015085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.015102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 
00:26:24.530 [2024-05-15 00:08:25.015502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.015836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.015853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.016280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.016663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:24.530 [2024-05-15 00:08:25.016679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.016696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.017124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.017547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.017564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.017992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.018396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.018414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.018846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.019253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.019271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.019702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.020128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.020145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.020522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.020948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.020968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 
00:26:24.530 [2024-05-15 00:08:25.021395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.021821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.021838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.022199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.022629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.022646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.023003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.023451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.023468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.023888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.024255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.024272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.024604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.025026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.025043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.025446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.025872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.025888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.026309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.026661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.026678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 
00:26:24.530 [2024-05-15 00:08:25.027082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.027497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.027514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.027921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.028348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.028365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.028790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.029159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.029176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.029610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.030000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.030016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.030392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.030816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.030832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.031184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.031562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.031579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.032007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.032409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.032426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 
00:26:24.530 [2024-05-15 00:08:25.032850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.033271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.033287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.033587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.033961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.033977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.034329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.034729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.034746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.530 qpair failed and we were unable to recover it. 00:26:24.530 [2024-05-15 00:08:25.035172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.530 [2024-05-15 00:08:25.035552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.035568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.035945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.036344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.036361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.036723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.037151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.037168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.037588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.037934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.037950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 
00:26:24.531 [2024-05-15 00:08:25.038260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.038642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.038659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.039061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.039395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.039412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.039855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.040183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.040205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.040559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.040985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.041002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.041428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.041852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.041868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.042271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.042674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.042690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.043123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.043544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.043560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 
00:26:24.531 [2024-05-15 00:08:25.043986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.044335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.044352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.044805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.045285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.045302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.045683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.045979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.045995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.046422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.046845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.046861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.047275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.047633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.047650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.048086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.048496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.048513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.048939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.049283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.049299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 
00:26:24.531 [2024-05-15 00:08:25.049679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.050102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.050118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.050547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.050896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.050912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.051288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.051720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.051737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.052100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.052440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.052457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.052829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.053256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.053273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.053702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.054127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.054143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.054525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.054947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.054964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 
00:26:24.531 [2024-05-15 00:08:25.055316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.055657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.531 [2024-05-15 00:08:25.055674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.531 qpair failed and we were unable to recover it. 00:26:24.531 [2024-05-15 00:08:25.056052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.056454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.056472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.056901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.057273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.057290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.057721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.058143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.058161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.058584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.058936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.058952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.059400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.059849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.059866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.060308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.060734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.060752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 
00:26:24.532 [2024-05-15 00:08:25.061157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.061581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.061599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.062009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.062301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.062319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.062746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.063048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.063065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.063406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.063736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.063754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.064114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.064277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.064294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.064668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.065031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.065047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.065449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.065875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.065892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 
00:26:24.532 [2024-05-15 00:08:25.066269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.066612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.066628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.066916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.067259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.067276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.067633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.068057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.068073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.068503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.068906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.068922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.069281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.069682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.069698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.070046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.070465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.070482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.070841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.071240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.071257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 
00:26:24.532 [2024-05-15 00:08:25.071610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.072031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.072047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.072424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.072792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.072808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.073221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.073519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.073536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.073970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.074262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.074279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.074614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.074961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.074977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.075405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.075693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.075709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.076063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.076464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.076480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 
00:26:24.532 [2024-05-15 00:08:25.076912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.077310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.077327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.077696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.078021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.078038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.532 [2024-05-15 00:08:25.078488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.078890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.532 [2024-05-15 00:08:25.078906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.532 qpair failed and we were unable to recover it. 00:26:24.533 [2024-05-15 00:08:25.079197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.079567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.079583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.533 qpair failed and we were unable to recover it. 00:26:24.533 [2024-05-15 00:08:25.079947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.080299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.080316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.533 qpair failed and we were unable to recover it. 00:26:24.533 [2024-05-15 00:08:25.080717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.081056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.081072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.533 qpair failed and we were unable to recover it. 00:26:24.533 [2024-05-15 00:08:25.081424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.081796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.081812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.533 qpair failed and we were unable to recover it. 
00:26:24.533 [2024-05-15 00:08:25.082235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.082656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.082672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.083024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.083367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.083384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.083713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.084128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.084144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.084185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a39140 (9): Bad file descriptor
00:26:24.533 [2024-05-15 00:08:25.084730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.085155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.085174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.085612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.086037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.086054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.086409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.086791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.086807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.087213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.087504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.087522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.087951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.088377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.088394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.088691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.089115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.089132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.089557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.089855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.089872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.090324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.090740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.090760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.091002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.091429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.091447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.091555] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:24.533 [2024-05-15 00:08:25.091584] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:24.533 [2024-05-15 00:08:25.091594] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:24.533 [2024-05-15 00:08:25.091607] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:24.533 [2024-05-15 00:08:25.091614] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:24.533 [2024-05-15 00:08:25.091730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:26:24.533 [2024-05-15 00:08:25.091874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.091840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:26:24.533 [2024-05-15 00:08:25.091938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:24.533 [2024-05-15 00:08:25.091938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:26:24.533 [2024-05-15 00:08:25.092301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.092318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.092742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.093043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.093060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.093415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.093733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.093750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.094203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.094538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.094556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.094865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.095177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.095198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.095574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.095919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.533 [2024-05-15 00:08:25.095936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420
00:26:24.533 qpair failed and we were unable to recover it.
00:26:24.533 [2024-05-15 00:08:25.096280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.096635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.096652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.533 qpair failed and we were unable to recover it. 00:26:24.533 [2024-05-15 00:08:25.096975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.097402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.097419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.533 qpair failed and we were unable to recover it. 00:26:24.533 [2024-05-15 00:08:25.097760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.098057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.098073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.533 qpair failed and we were unable to recover it. 00:26:24.533 [2024-05-15 00:08:25.098412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.098785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.098803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.533 qpair failed and we were unable to recover it. 00:26:24.533 [2024-05-15 00:08:25.099176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.099607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.533 [2024-05-15 00:08:25.099625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.533 qpair failed and we were unable to recover it. 00:26:24.534 [2024-05-15 00:08:25.100062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.100394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.100411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.534 qpair failed and we were unable to recover it. 00:26:24.534 [2024-05-15 00:08:25.100832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.101162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.101180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.534 qpair failed and we were unable to recover it. 
00:26:24.534 [2024-05-15 00:08:25.101543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.101913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.101931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.534 qpair failed and we were unable to recover it. 00:26:24.534 [2024-05-15 00:08:25.102359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.102706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.102724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.534 qpair failed and we were unable to recover it. 00:26:24.534 [2024-05-15 00:08:25.103081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.103527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.103544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.534 qpair failed and we were unable to recover it. 00:26:24.534 [2024-05-15 00:08:25.103899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.104349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.104367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.534 qpair failed and we were unable to recover it. 00:26:24.534 [2024-05-15 00:08:25.104748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.105038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.105055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.534 qpair failed and we were unable to recover it. 00:26:24.534 [2024-05-15 00:08:25.105504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.105933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.105951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.534 qpair failed and we were unable to recover it. 00:26:24.534 [2024-05-15 00:08:25.106401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.106758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.534 [2024-05-15 00:08:25.106775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.534 qpair failed and we were unable to recover it. 
00:26:24.805 [2024-05-15 00:08:25.107208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.107582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.107599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.108023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.108426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.108444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.108872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.109234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.109251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.109681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.110062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.110079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.110486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.110912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.110928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.111355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.111721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.111738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.112178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.112626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.112644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 
00:26:24.805 [2024-05-15 00:08:25.112995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.113347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.113365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.113678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.114072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.114089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.114500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.114902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.114919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.115323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.115731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.115748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.116184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.116615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.116631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.117060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.117505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.117522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.117871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.118203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.118220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 
00:26:24.805 [2024-05-15 00:08:25.118606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.118957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.118974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.119405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.119780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.805 [2024-05-15 00:08:25.119797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.805 qpair failed and we were unable to recover it. 00:26:24.805 [2024-05-15 00:08:25.120134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.120558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.120575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.120980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.121401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.121418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.121741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.122206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.122223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.122572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.122929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.122946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.123366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.123743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.123760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 
00:26:24.806 [2024-05-15 00:08:25.124114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.124463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.124480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.124757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.125116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.125132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.125499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.125863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.125880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.126313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.126605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.126621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.126972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.127374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.127392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.127749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.128209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.128226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.128569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.129016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.129034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 
00:26:24.806 [2024-05-15 00:08:25.129390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.129733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.129749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.130182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.130592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.130612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.131013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.131439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.131456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.131879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.132296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.132312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.132668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.133011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.133028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.133406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.133754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.133770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.134126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.134484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.134501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 
00:26:24.806 [2024-05-15 00:08:25.134873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.135212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.135229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.135595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.135946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.135962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.136414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.136862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.136879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.137302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.137681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.137697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.138148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.138469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.138490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.138917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.139339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.139357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.139792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.140214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.140231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 
00:26:24.806 [2024-05-15 00:08:25.140635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.141042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.141059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.141508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.141861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.141880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.142328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.142684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.142700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.806 qpair failed and we were unable to recover it. 00:26:24.806 [2024-05-15 00:08:25.143104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.143436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.806 [2024-05-15 00:08:25.143453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.143790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.144256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.144273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.144628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.145066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.145083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.145449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.145879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.145896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 
00:26:24.807 [2024-05-15 00:08:25.146300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.146692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.146709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.147065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.147513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.147531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.147963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.148385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.148402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.148828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.149234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.149250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.149687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.150089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.150105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.150439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.150781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.150797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.151132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.151557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.151574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 
00:26:24.807 [2024-05-15 00:08:25.151956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.152382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.152399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.152825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.153255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.153271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.153626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.154038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.154054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.154487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.154832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.154848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.155301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.155727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.155743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.156044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.156464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.156480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.156815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.157201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.157218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 
00:26:24.807 [2024-05-15 00:08:25.157643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.158048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.158065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.158517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.158871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.158887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.159262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.159688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.159705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.160069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.160507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.160523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.160807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.161154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.161170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.161524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.161932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.161948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.162294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.162697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.162713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 
00:26:24.807 [2024-05-15 00:08:25.163097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.163535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.163552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.163958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.164378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.164394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.164749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.165202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.165219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.165648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.166073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.166089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.166498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.166869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.166885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.807 qpair failed and we were unable to recover it. 00:26:24.807 [2024-05-15 00:08:25.167311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.807 [2024-05-15 00:08:25.167660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.167676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.168052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.168416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.168432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 
00:26:24.808 [2024-05-15 00:08:25.168834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.169210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.169227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.169654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.170022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.170038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.170475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.170895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.170911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.171345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.171774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.171790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.172216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.172640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.172656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.173069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.173423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.173439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.173773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.174140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.174156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 
00:26:24.808 [2024-05-15 00:08:25.174523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.174962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.174978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.175384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.175767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.175784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.176204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.176558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.176574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.176979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.177347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.177363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.177770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.178189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.178217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.178642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.178997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.179013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.179463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.179811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.179835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 
00:26:24.808 [2024-05-15 00:08:25.180281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.180637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.180653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.181002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.181447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.181463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.181889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.182241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.182258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.182599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.182999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.183015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.183356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.183711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.183727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.184135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.184464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.184481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.184929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.185351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.185368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 
00:26:24.808 [2024-05-15 00:08:25.185808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.186254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.186270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.186704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.187002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.187018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.187365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.187790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.187806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.188216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.188555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.188571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.188874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.189133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.189149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.189543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.189886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.189902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 00:26:24.808 [2024-05-15 00:08:25.190200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.190550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.808 [2024-05-15 00:08:25.190566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.808 qpair failed and we were unable to recover it. 
00:26:24.808 [2024-05-15 00:08:25.190994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.191367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.191384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.191666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.192022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.192038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.192442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.192810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.192826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.193157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.193506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.193522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.193946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.194243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.194259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.194702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.195123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.195139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.195569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.195994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.196011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 
00:26:24.809 [2024-05-15 00:08:25.196461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.196802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.196819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.197221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.197604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.197620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.198046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.198446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.198463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.198835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.199258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.199275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.199586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.200028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.200044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.200402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.200750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.200766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.201102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.201500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.201516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 
00:26:24.809 [2024-05-15 00:08:25.201748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.202187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.202212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.202579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.202990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.203006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.203407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.203758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.203775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.204078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.204343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.204362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.204695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.205097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.205113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.205539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.205976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.205992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.206364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.206763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.206779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 
00:26:24.809 [2024-05-15 00:08:25.207127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.207553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.207570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.207857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.208233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.208251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.208672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.208998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.209015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.809 [2024-05-15 00:08:25.209418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.209833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.809 [2024-05-15 00:08:25.209849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.809 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.210113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.210465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.210482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.210833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.211170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.211186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.211569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.211980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.211996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 
00:26:24.810 [2024-05-15 00:08:25.212162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.212574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.212591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.213017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.213430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.213446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.213778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.214188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.214208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.214563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.214967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.214983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.215359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.215638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.215655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.216059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.216479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.216495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.216848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.217131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.217148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 
00:26:24.810 [2024-05-15 00:08:25.217571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.217973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.217989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.218203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.218605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.218622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.218992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.219396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.219412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.219711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.220133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.220149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.220528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.220949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.220966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.221295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.221726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.221742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.222041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.222411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.222427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 
00:26:24.810 [2024-05-15 00:08:25.222859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.223199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.223216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.223496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.223897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.223913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.224253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.224582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.224598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.225021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.225442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.225459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.225908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.226340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.226357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.226756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.227179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.227198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.227601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.227998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.228015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 
00:26:24.810 [2024-05-15 00:08:25.228361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.228779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.228795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.229199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.229550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.229566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.229917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.230282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.230299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.230703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.231123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.231138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.231563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.231909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.810 [2024-05-15 00:08:25.231925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.810 qpair failed and we were unable to recover it. 00:26:24.810 [2024-05-15 00:08:25.232352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.232543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.232559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.232984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.233432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.233448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 
00:26:24.811 [2024-05-15 00:08:25.233714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.234073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.234090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.234368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.234788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.234804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.235231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.235637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.235653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.236078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.236478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.236494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.236704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.237111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.237127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.237474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.237848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.237864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.238227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.238626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.238642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 
00:26:24.811 [2024-05-15 00:08:25.238845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.239243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.239259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.239543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.239882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.239898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.240308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.240722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.240738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.241149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.241496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.241514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.241918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.242080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.242096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.242499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.242852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.242868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.243294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.243642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.243658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 
00:26:24.811 [2024-05-15 00:08:25.244074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.244418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.244434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.244839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.245243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.245259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.245631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.246055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.246071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.246420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.246840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.246856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.247141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.247496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.247513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.247848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.248182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.248203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.248632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.248971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.248990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 
00:26:24.811 [2024-05-15 00:08:25.249365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.249739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.249755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.250119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.250493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.250509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.250808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.251220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.251236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.251659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.251941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.251957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.252308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.252730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.252746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.253101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.253457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.253473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 00:26:24.811 [2024-05-15 00:08:25.253845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.254261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.811 [2024-05-15 00:08:25.254277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.811 qpair failed and we were unable to recover it. 
00:26:24.812 [2024-05-15 00:08:25.254714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.255058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.255074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.255497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.255856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.255872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.256298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.256720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.256739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.257089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.257438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.257455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.257881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.258233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.258249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.258625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.259022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.259038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.259440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.259803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.259820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 
00:26:24.812 [2024-05-15 00:08:25.260152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.260449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.260465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.260800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.261141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.261157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.261504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.261856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.261872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.262276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.262647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.262664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.263088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.263456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.263473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.263912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.264213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.264231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.264565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.264891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.264907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 
00:26:24.812 [2024-05-15 00:08:25.265211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.265584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.265600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.266014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.266367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.266384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.266753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.267171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.267187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.267615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.267986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.268002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.268425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.268843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.268859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.269286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.269747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.269763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.270108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.270509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.270525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 
00:26:24.812 [2024-05-15 00:08:25.270944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.271366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.271382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.271722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.272122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.272138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.272518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.272844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.272860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.273236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.273652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.273668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.274021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.274441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.274457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.274888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.275258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.275274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 00:26:24.812 [2024-05-15 00:08:25.275631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.275981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.275997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.812 qpair failed and we were unable to recover it. 
00:26:24.812 [2024-05-15 00:08:25.276347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.276689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.812 [2024-05-15 00:08:25.276705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.277128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.277476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.277492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.277913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.278335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.278351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.278689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.279036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.279053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.279256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.279600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.279616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.280010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.280372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.280389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.280794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.281153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.281169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 
00:26:24.813 [2024-05-15 00:08:25.281575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.281884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.281900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.282347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.282710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.282726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.283080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.283432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.283448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.283850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.284198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.284214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.284664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.285082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.285098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.285387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.285712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.285729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.286129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.286460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.286477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 
00:26:24.813 [2024-05-15 00:08:25.286673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.287093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.287110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.287471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.287827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.287843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.288230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.288558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.288574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.288931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.289302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.289318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.289758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.290083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.290100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.290381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.290760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.290776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.291063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.291359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.291376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 
00:26:24.813 [2024-05-15 00:08:25.291727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.292021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.292037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.292463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.292802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.292818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.293100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.293496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.293513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.293937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.294270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.294287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.294708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.295039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.295055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.295478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.295823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.295839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.296035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.296437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.296453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 
00:26:24.813 [2024-05-15 00:08:25.296813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.297073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.297089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.297469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.297921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.813 [2024-05-15 00:08:25.297937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.813 qpair failed and we were unable to recover it. 00:26:24.813 [2024-05-15 00:08:25.298291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.298640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.298657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.298990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.299349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.299365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.299649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.300047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.300063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.300524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.300683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.300699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.301049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.301326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.301343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 
00:26:24.814 [2024-05-15 00:08:25.301691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.301966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.301982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.302281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.302614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.302630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.302896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.303249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.303265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.303608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.303958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.303974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.304307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.304637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.304653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.304995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.305339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.305355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.305760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.306180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.306202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 
00:26:24.814 [2024-05-15 00:08:25.306535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.306865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.306881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.307240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.307573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.307589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.307954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.308307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.308324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.308730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.309151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.309167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.309596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.309948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.309964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.310393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.310680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.310696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.311053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.311347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.311363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 
00:26:24.814 [2024-05-15 00:08:25.311769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.312167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.312183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.312633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.312964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.312981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.313335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.313671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.313687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.314070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.314345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.314362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.314788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.315169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.315185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.315565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.315903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.315920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 00:26:24.814 [2024-05-15 00:08:25.316351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.316545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.814 [2024-05-15 00:08:25.316561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.814 qpair failed and we were unable to recover it. 
00:26:24.815 [2024-05-15 00:08:25.316965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.317321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.317337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.317639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.318036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.318052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.318434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.318562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.318579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.318855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.319278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.319295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.319723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.320075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.320092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.320496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.320837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.320854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.321259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.321609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.321625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 
00:26:24.815 [2024-05-15 00:08:25.321969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.322392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.322409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.322736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.323085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.323101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.323471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.323849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.323865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.324271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.324620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.324636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.324975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.325386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.325402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.325757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.326051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.326068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.326475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.326916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.326933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 
00:26:24.815 [2024-05-15 00:08:25.327285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.327624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.327641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.327998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.328423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.328441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.328799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.329160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.329177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.329461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.329790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.329806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.330144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.330419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.330436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.330770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.331067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.331084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.331388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.331809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.331825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 
00:26:24.815 [2024-05-15 00:08:25.332107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.332511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.332528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.332874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.333230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.333247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.333597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.333942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.333958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.334242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.334567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.334583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.335021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.335356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.335373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.335753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.336102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.336118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.336400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.336827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.336843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 
00:26:24.815 [2024-05-15 00:08:25.337261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.337604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.337620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.815 qpair failed and we were unable to recover it. 00:26:24.815 [2024-05-15 00:08:25.337973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.815 [2024-05-15 00:08:25.338339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.338355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.338710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.339072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.339088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.339447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.339858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.339874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.340276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.340616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.340633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.340910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.341263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.341279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.341719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.342087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.342103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 
00:26:24.816 [2024-05-15 00:08:25.342528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.342947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.342963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.343322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.343671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.343687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.344064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.344487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.344504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.344921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.345269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.345285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.345686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.346112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.346129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.346425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.346716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.346732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.347082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.347529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.347546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 
00:26:24.816 [2024-05-15 00:08:25.347975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.348304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.348321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.348601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.348940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.348956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.349307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.349726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.349743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.350144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.350416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.350433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.350787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.351075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.351091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.351515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.351808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.351825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.352183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.352609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.352626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 
00:26:24.816 [2024-05-15 00:08:25.353035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.353410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.353427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.353730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.354079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.354096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.354401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.354745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.354762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.355180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.355595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.355613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.356042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.356319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.356336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.356683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.357081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.357097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.357468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.357816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.357832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 
00:26:24.816 [2024-05-15 00:08:25.358256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.358590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.358606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.358952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.359103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.359119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.816 qpair failed and we were unable to recover it. 00:26:24.816 [2024-05-15 00:08:25.359476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.359761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.816 [2024-05-15 00:08:25.359777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.360109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.360446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.360465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.360818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.361103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.361119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.361458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.361756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.361772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.362076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.362473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.362490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 
00:26:24.817 [2024-05-15 00:08:25.362844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.363069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.363085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.363489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.363792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.363808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.364209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.364649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.364665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.365016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.365435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.365452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.365789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.366119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.366135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.366511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.366810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.366826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.367208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.367627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.367645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 
00:26:24.817 [2024-05-15 00:08:25.368018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.368368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.368384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.368718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.369051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.369067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.369381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.369811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.369827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.370253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.370620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.370636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.371040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.371440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.371457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.371740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.372096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.372112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.372493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.372900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.372917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 
00:26:24.817 [2024-05-15 00:08:25.373183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.373544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.373560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.373853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.374220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.374237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.374526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.374949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.374967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.375350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.375771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.375787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.376137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.376571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.376588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.377005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.377271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.377288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.377715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.378111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.378127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 
00:26:24.817 [2024-05-15 00:08:25.378502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.378769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.378786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.379072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.379476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.379492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.379899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.380297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.380314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.817 [2024-05-15 00:08:25.380580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.380839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.817 [2024-05-15 00:08:25.380855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.817 qpair failed and we were unable to recover it. 00:26:24.818 [2024-05-15 00:08:25.381261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.818 [2024-05-15 00:08:25.381558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.818 [2024-05-15 00:08:25.381574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.818 qpair failed and we were unable to recover it. 00:26:24.818 [2024-05-15 00:08:25.382006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.818 [2024-05-15 00:08:25.382357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.818 [2024-05-15 00:08:25.382373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.818 qpair failed and we were unable to recover it. 00:26:24.818 [2024-05-15 00:08:25.382749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.818 [2024-05-15 00:08:25.383096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.818 [2024-05-15 00:08:25.383113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.818 qpair failed and we were unable to recover it. 
00:26:24.818 [2024-05-15 00:08:25.383396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.818 [2024-05-15 00:08:25.383669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.818 [2024-05-15 00:08:25.383685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:24.818 qpair failed and we were unable to recover it. 00:26:24.818 [2024-05-15 00:08:25.384041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.818 [2024-05-15 00:08:25.384464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.384481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.084 qpair failed and we were unable to recover it. 00:26:25.084 [2024-05-15 00:08:25.384862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.385200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.385217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.084 qpair failed and we were unable to recover it. 00:26:25.084 [2024-05-15 00:08:25.385634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.386043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.386059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.084 qpair failed and we were unable to recover it. 00:26:25.084 [2024-05-15 00:08:25.386405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.386778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.386793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.084 qpair failed and we were unable to recover it. 00:26:25.084 [2024-05-15 00:08:25.387079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.387375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.387392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.084 qpair failed and we were unable to recover it. 00:26:25.084 [2024-05-15 00:08:25.387797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.388140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.388156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.084 qpair failed and we were unable to recover it. 
00:26:25.084 [2024-05-15 00:08:25.388456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.388827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.388843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.084 qpair failed and we were unable to recover it. 00:26:25.084 [2024-05-15 00:08:25.389114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.389458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.084 [2024-05-15 00:08:25.389474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.084 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.389776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.390141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.390157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.390499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.390899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.390915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.391252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.391542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.391558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.391893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.392243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.392260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.392678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.393031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.393047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 
00:26:25.085 [2024-05-15 00:08:25.393382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.393543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.393559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.393962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.394308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.394324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.394666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.395008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.395024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.395372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.395638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.395654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.395993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.396329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.396346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.396753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.397044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.397060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.397413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.397752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.397768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 
00:26:25.085 [2024-05-15 00:08:25.398056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.398477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.398494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.398846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.399269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.399286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.399624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.399972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.399988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.400271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.400485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.400501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.400850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.401194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.401210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.401559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.401850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.401866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.402304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.402586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.402602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 
00:26:25.085 [2024-05-15 00:08:25.402947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.403239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.403256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.403553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.403953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.403969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.404281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.404712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.404728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.404993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.405390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.405406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.405544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.405830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.405846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.406211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.406478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.406493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.406837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.407177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.407199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 
00:26:25.085 [2024-05-15 00:08:25.407653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.408073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.408090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.408388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.408730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.408747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.085 [2024-05-15 00:08:25.409105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.409394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.085 [2024-05-15 00:08:25.409410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.085 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.409757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.410110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.410126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.410403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.410739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.410755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.411123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.411531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.411548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.411955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.412288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.412304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 
00:26:25.086 [2024-05-15 00:08:25.412644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.413000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.413016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.413401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.413685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.413701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.414023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.414323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.414340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.414792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.415168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.415184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.415599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.415975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.415992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.416303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.416649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.416666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.417027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.417378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.417394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 
00:26:25.086 [2024-05-15 00:08:25.417739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.418021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.418037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.418390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.418732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.418748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.419114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.419499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.419515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.419806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.420156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.420172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.420566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.420844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.420860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.421253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.421629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.421645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.422049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.422388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.422404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 
00:26:25.086 [2024-05-15 00:08:25.422753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.423081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.423098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.423488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.423838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.423854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.424313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.424718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.424735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.425142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.425488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.425504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.425869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.426222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.426239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.426597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.426949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.426965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.427336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.427734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.427750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 
00:26:25.086 [2024-05-15 00:08:25.428107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.428447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.428464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.428870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.429215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.429231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.429582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.429930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.429946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.430283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.430643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.430659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.086 qpair failed and we were unable to recover it. 00:26:25.086 [2024-05-15 00:08:25.431012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.086 [2024-05-15 00:08:25.431346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.431363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.431672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.432131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.432147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.432514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.432822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.432842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 
00:26:25.087 [2024-05-15 00:08:25.433186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.433566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.433583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.433932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.434229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.434246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.434651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.434924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.434940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.435375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.435811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.435827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.436172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.436309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.436325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.436624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.436998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.437014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.437295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.437628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.437644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 
00:26:25.087 [2024-05-15 00:08:25.437907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.438329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.438346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.438494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.438819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.438836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.439240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.439523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.439539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.439829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.440169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.440185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.440591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.440734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.440750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.441113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.441410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.441427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.441816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.442139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.442155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 
00:26:25.087 [2024-05-15 00:08:25.442574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.442946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.442962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.443387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.443665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.443682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.444036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.444405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.444422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.444557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.444915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.444931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.445272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.445617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.445633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.445968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.446234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.446253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.446609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.446883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.446899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 
00:26:25.087 [2024-05-15 00:08:25.447232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.447501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.447518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.447898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.448267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.448284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.448650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.449019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.449035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.449375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.449716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.449733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.450148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.450486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.450503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.450852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.451176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.451197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.087 qpair failed and we were unable to recover it. 00:26:25.087 [2024-05-15 00:08:25.451623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.087 [2024-05-15 00:08:25.451908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.451924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 
00:26:25.088 [2024-05-15 00:08:25.452186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.452613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.452630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.452902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.453099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.453115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.453482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.453809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.453825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.454226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.454517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.454534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.454937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.455210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.455226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.455593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.455944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.455960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.456298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.456628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.456645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 
00:26:25.088 [2024-05-15 00:08:25.457096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.457440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.457457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.457863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.458295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.458311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.458665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.459017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.459034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.459176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.459545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.459562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.459934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.460226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.460242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.460665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.460950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.460966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.461328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.461731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.461747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 
00:26:25.088 [2024-05-15 00:08:25.462083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.462451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.462467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.462835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.463182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.463201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.463580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.463907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.463923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.464316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.464613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.464629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.464956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.465246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.465262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.465550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.465974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.465990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.466408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.466742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.466758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 
00:26:25.088 [2024-05-15 00:08:25.467106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.467451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.467467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.467828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.468174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.468196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.468545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.468880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.468896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.469343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.469696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.469712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.470072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.470352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.470369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.470713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.471051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.471067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.088 [2024-05-15 00:08:25.471441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.471851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.471868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 
00:26:25.088 [2024-05-15 00:08:25.472204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.472641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.088 [2024-05-15 00:08:25.472658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.088 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.472941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.473357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.473374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.473722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.474088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.474104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.474454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.474825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.474841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.475214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.475496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.475512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.475882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.476174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.476194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.476486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.476851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.476867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 
00:26:25.089 [2024-05-15 00:08:25.477167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.477616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.477633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.477911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.478281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.478298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.478689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.478960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.478977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.479122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.479473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.479490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.479894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.480245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.480261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.480609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.480893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.480909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.481246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.481590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.481606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 
00:26:25.089 [2024-05-15 00:08:25.481985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.482347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.482364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.482783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.483205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.483221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.483579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.483845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.483862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.484221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.484627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.484643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.485042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.485397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.485414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.485552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.485965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.485981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.486338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.486675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.486691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 
00:26:25.089 [2024-05-15 00:08:25.486983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.487252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.487268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.487659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.488008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.488025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.488368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.488724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.488740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.489078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.489371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.489388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.089 qpair failed and we were unable to recover it. 00:26:25.089 [2024-05-15 00:08:25.489758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.490043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.089 [2024-05-15 00:08:25.490060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.490339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.490705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.490722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.490999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.491349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.491365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 
00:26:25.090 [2024-05-15 00:08:25.491730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.492137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.492153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.492588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.492870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.492886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.493051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.493408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.493424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.493830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.494187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.494207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.494608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.494935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.494951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.495332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.495775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.495791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.496111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.496456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.496473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 
00:26:25.090 [2024-05-15 00:08:25.496825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.497253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.497270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.497615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.498020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.498037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.498317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.498771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.498787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.499201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.499602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.499619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.499985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.500406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.500424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.500781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.501137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.501153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.501486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.501822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.501838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 
00:26:25.090 [2024-05-15 00:08:25.502243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.502363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.502379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.502728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.503075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.503091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.503439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.503791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.503808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.504168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.504522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.504538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.504844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.505267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.505284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.505689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.506023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.506039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.506347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.506681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.506697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 
00:26:25.090 [2024-05-15 00:08:25.507046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.507391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.507407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.507738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.508072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.508088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.508450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.508865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.508882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.509236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.509441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.509457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.509797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.510204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.510221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.510506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.510923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.510942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.090 qpair failed and we were unable to recover it. 00:26:25.090 [2024-05-15 00:08:25.511296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.090 [2024-05-15 00:08:25.511603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.511620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 
00:26:25.091 [2024-05-15 00:08:25.512062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.512292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.512308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.512647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.512999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.513016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.513346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.513687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.513703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.513979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.514346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.514363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.514735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.515136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.515152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.515501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.515837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.515854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.516272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.516409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.516425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 
00:26:25.091 [2024-05-15 00:08:25.516840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.517180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.517208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.517612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.517904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.517923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.518348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.518539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.518555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.518854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.519195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.519212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.519561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.519911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.519928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.520258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.520589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.520604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.520960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.521259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.521275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 
00:26:25.091 [2024-05-15 00:08:25.521620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.522028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.522045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.522470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.522771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.522787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.522999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.523286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.523302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.523575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.523935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.523951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.524335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.524680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.524699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.525058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.525457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.525474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.525824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.526247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.526263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 
00:26:25.091 [2024-05-15 00:08:25.526610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.527006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.527022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.527379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.527805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.527821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.528246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.528652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.528668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.529019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.529419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.529436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.529719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.530062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.530079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.530424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.530779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.530796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.091 [2024-05-15 00:08:25.531143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.531489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.531505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 
00:26:25.091 [2024-05-15 00:08:25.531857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.532214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.091 [2024-05-15 00:08:25.532232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.091 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.532585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.532936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.532952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.533228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.533565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.533581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.533922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.534277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.534294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.534711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.535069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.535085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.535444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.535750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.535766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.536031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.536362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.536378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 
00:26:25.092 [2024-05-15 00:08:25.536733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.537054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.537070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.537441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.537780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.537796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.538073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.538467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.538483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.538834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.539208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.539224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.539604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.539967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.539983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.540280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.540545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.540561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.540909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.541253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.541269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 
00:26:25.092 [2024-05-15 00:08:25.541601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.541952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.541968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.542266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.542678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.542694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.542974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.543302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.543319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.543667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.543950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.543966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.544372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.544742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.544759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.545131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.545564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.545580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.545982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.546381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.546398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 
00:26:25.092 [2024-05-15 00:08:25.546740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.547036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.547052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.547408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.547759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.547775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.548054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.548403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.548420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.548764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.549038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.549054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.549334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.549668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.549684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.550031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.550312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.550329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.550626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.551052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.551069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 
00:26:25.092 [2024-05-15 00:08:25.551433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.551780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.551796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.552137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.552411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.552427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.092 qpair failed and we were unable to recover it. 00:26:25.092 [2024-05-15 00:08:25.552770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.092 [2024-05-15 00:08:25.553155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.553171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.553456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.553723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.553740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.554078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.554500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.554516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.554801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.555202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.555219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.555644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.555912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.555929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 
00:26:25.093 [2024-05-15 00:08:25.556244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.556598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.556614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.556903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.557257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.557274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.557653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.558015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.558033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.558333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.558634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.558650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.558992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.559396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.559412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.559784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.560150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.560166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.560577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.560862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.560879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 
00:26:25.093 [2024-05-15 00:08:25.561285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.561687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.561703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.561985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.562324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.562341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.562771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.563114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.563130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.563477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.563905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.563921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.564261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.564671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.564688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.565032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.565405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.565421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.565825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.566118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.566133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 
00:26:25.093 [2024-05-15 00:08:25.566420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.566793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.566809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.567104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.567382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.567398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.567753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.568040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.568056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.568401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.568696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.568712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.569121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.569403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.569419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.569698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.570041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.570058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 00:26:25.093 [2024-05-15 00:08:25.570354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.570700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.570717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.093 qpair failed and we were unable to recover it. 
00:26:25.093 [2024-05-15 00:08:25.570986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.093 [2024-05-15 00:08:25.571349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.571365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.571629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.571978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.571994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.572271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.572560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.572576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.572845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.573107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.573123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.573471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.573759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.573776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.574066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.574351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.574368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.574660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.575084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.575100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 
00:26:25.094 [2024-05-15 00:08:25.575388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.575733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.575749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.576100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.576398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.576414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.576755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.577036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.577051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.577214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.577547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.577564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.577859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.578204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.578221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.578626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.579029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.579046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.579408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.579699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.579715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 
00:26:25.094 [2024-05-15 00:08:25.580055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.580320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.580336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.580669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.580998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.581014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.581387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.581732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.581748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.582023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.582324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.582340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.582712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.583066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.583083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.583491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.583762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.583779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.584198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.584523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.584540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 
00:26:25.094 [2024-05-15 00:08:25.584955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.585232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.585249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.585591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.585937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.585953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.586231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.586508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.586524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.586952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.587305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.587322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.587694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.588177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.588203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.588563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.589013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.589029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.589326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.589674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.589691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 
00:26:25.094 [2024-05-15 00:08:25.589990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.590389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.590406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.094 qpair failed and we were unable to recover it. 00:26:25.094 [2024-05-15 00:08:25.590691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.591109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.094 [2024-05-15 00:08:25.591125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.591550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.591898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.591915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.592278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.592686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.592702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.592976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.593122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.593138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.593541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.593818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.593834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.594173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.594594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.594611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 
00:26:25.095 [2024-05-15 00:08:25.594968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.595248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.595265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.595634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.595969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.595985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.596338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.596679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.596696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.597128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.597530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.597547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.597911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.598183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.598204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.598479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.598853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.598869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.599296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.599737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.599754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 
00:26:25.095 [2024-05-15 00:08:25.600046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.600398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.600416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.600776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.601061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.601077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.601495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.601830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.601846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.602204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.602540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.602559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.602834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.603189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.603212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.603490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.603762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.603779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.604059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.604418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.604434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 
00:26:25.095 [2024-05-15 00:08:25.604840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.605106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.605122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.605534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.605935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.605951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.606308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.606570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.606587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.606992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.607278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.607295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.607710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.608120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.608136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.608497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.608898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.608914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.609205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.609554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.609570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 
00:26:25.095 [2024-05-15 00:08:25.609857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.610189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.610209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.610563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.610983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.610999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.611402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.611828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.611844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.095 qpair failed and we were unable to recover it. 00:26:25.095 [2024-05-15 00:08:25.612150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.612316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.095 [2024-05-15 00:08:25.612332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.612685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.612946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.612963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.613392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.613740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.613757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.614094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.614394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.614411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 
00:26:25.096 [2024-05-15 00:08:25.614693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.615000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.615017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.615391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.615728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.615744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.615978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.616250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.616267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.616603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.616966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.616982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.617320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.617630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.617647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.618101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.618478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.618495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.618769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.619132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.619148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 
00:26:25.096 [2024-05-15 00:08:25.619552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.619991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.620007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.620394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.620687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.620703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.621080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.621499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.621516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.621861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.622164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.622180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.622502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.622864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.622880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.623309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.623668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.623684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.623960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.624308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.624326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 
00:26:25.096 [2024-05-15 00:08:25.624687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.625039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.625055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.625408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.625780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.625797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.626076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.626271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.626288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.626665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.627008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.627025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.627372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.627652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.627669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.628025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.628382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.628401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.628679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.629096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.629113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 
00:26:25.096 [2024-05-15 00:08:25.629469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.629778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.629794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.630140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.630428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.630445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.630854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.631217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.631234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.631588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.631916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.631933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.632278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.632629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.632646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.096 qpair failed and we were unable to recover it. 00:26:25.096 [2024-05-15 00:08:25.632973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.096 [2024-05-15 00:08:25.633339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.633357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.633701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.634034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.634050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 
00:26:25.097 [2024-05-15 00:08:25.634411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.634773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.634789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.635142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.635477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.635493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.635784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.636237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.636253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.636562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.636965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.636981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.637263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.637680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.637696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.638118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.638420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.638436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.638841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.638971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.638987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 
00:26:25.097 [2024-05-15 00:08:25.639340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.639685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.639701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.640108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.640435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.640451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.640828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.641205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.641221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.641644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.642045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.642062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.642466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.642820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.642836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.643206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.643575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.643591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.643995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.644321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.644338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 
00:26:25.097 [2024-05-15 00:08:25.644741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.645141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.645157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.645547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.645972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.645989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.646347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.646665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.646681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.646834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.647118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.647134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.647462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.647728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.647744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.648085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.648430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.648447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.648746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.649131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.649147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 
00:26:25.097 [2024-05-15 00:08:25.649355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.649727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.649743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.650147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.650441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.650458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.650887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.651078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.651094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.651451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.651899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.651915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.652340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.652538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.652555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.097 [2024-05-15 00:08:25.652901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.653277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.097 [2024-05-15 00:08:25.653293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.097 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.653584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.653725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.653741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 
00:26:25.098 [2024-05-15 00:08:25.654092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.654422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.654439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.654733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.655065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.655082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.655444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.655859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.655875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.656178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.656528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.656545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.656884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.657164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.657180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.657523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.657851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.657867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.658209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.658497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.658514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 
00:26:25.098 [2024-05-15 00:08:25.658878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.659215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.659232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.659520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.659882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.659903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.660188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.660539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.660556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.660827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.661228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.661244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.661508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.661910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.661926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.662276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.662559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.662576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.662858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.663206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.663222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 
00:26:25.098 [2024-05-15 00:08:25.663564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.663931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.663948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.664105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.664377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.664394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.664745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.665082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.665098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.665253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.665599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.665619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.665954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.666361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.666378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.666504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.666861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.666877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 00:26:25.098 [2024-05-15 00:08:25.667090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.667441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.098 [2024-05-15 00:08:25.667457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.098 qpair failed and we were unable to recover it. 
00:26:25.363 [2024-05-15 00:08:25.667823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.668247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.668264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.668395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.668540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.668557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.668901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.669246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.669265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.669611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.670016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.670032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.670377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.670721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.670738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.671083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.671508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.671525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.671953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.672284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.672303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 
00:26:25.363 [2024-05-15 00:08:25.672660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.673011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.673027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.673316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.673678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.673695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.674118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.674464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.674480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.674755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.675156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.675174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.675329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.675629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.675646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.675943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.676273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.676290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.676578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.676874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.676890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 
00:26:25.363 [2024-05-15 00:08:25.677255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.677624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.677641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.677993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.678345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.678362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.678746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.679117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.679136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.679473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.679752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.679768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.680142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.680428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.680445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.680855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.681177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.681199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 00:26:25.363 [2024-05-15 00:08:25.681502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.681841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.363 [2024-05-15 00:08:25.681857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.363 qpair failed and we were unable to recover it. 
00:26:25.363 [2024-05-15 00:08:25.682216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.363 [2024-05-15 00:08:25.682365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.363 [2024-05-15 00:08:25.682382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420
00:26:25.363 qpair failed and we were unable to recover it.
00:26:25.363 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously from 00:08:25.682786 through 00:08:25.742333, all against tqpair=0x7f7090000b90, addr=10.0.0.2, port=4420 ...]
00:26:25.366 [2024-05-15 00:08:25.742711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.366 [... further connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." entries for tqpair=0x7f7090000b90, addr=10.0.0.2, port=4420, interleaved with the xtrace output below ...]
00:26:25.367 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:26:25.367 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:26:25.367 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:25.367 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:25.367 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:25.367 [2024-05-15 00:08:25.746671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.367 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence keeps repeating from 00:08:25.746965 through 00:08:25.781887, all against tqpair=0x7f7090000b90, addr=10.0.0.2, port=4420 ...]
00:26:25.369 [2024-05-15 00:08:25.782177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.782453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.782469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.782752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.783097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.783114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.783411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.783696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.783715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.784073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.784477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.784495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.784785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.785123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.785140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.785437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.785713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.785729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.786099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.786445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.786463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 
00:26:25.369 [2024-05-15 00:08:25.786732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.786993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.787010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.787362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.787706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.787724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.788084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.788355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.788373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.788660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.789005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.789021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.789154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.369 [2024-05-15 00:08:25.789503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.789521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:25.369 [2024-05-15 00:08:25.789824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.369 [2024-05-15 00:08:25.790121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.790138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 
00:26:25.369 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.369 [2024-05-15 00:08:25.790482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.790747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.790763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.791045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.791329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.791346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.791683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.791959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.791976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.792385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.792586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.792602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.792865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.793148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.793164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.793444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.793718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.793734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 00:26:25.369 [2024-05-15 00:08:25.794006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.794358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.794375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.369 qpair failed and we were unable to recover it. 
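The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 call traced above asks the running SPDK target to create a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0; rpc_cmd is the autotest helper that forwards to scripts/rpc.py against the target's RPC socket. A roughly equivalent standalone invocation (illustrative sketch, assuming the default RPC socket) would be:

    # create a 64 MB malloc bdev with 512-byte blocks, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0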
00:26:25.369 [2024-05-15 00:08:25.794723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.369 [2024-05-15 00:08:25.795061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.795077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.795422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.795701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.795717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.795987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.796329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.796346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.796640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.797038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.797055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.797333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.797615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.797631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.797891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.798168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.798185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.798541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.798815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.798832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 
00:26:25.370 [2024-05-15 00:08:25.799178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.799519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.799535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.799876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.800153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.800169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.800530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.800794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.800811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.801166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.801510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.801528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.801811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.801944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.801961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.802239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.802505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.802523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.802790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.803164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.803182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 
00:26:25.370 [2024-05-15 00:08:25.803525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.803789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.803807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.804213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.804594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.804613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.804881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.805245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.805265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.805680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.805965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.805982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.806265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.806552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.806568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.806851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.807183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.807204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.807474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.807679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.807696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 
00:26:25.370 [2024-05-15 00:08:25.807997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.808364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.808384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 Malloc0 00:26:25.370 [2024-05-15 00:08:25.808723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.808985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.809001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.809280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.370 [2024-05-15 00:08:25.809557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.809574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:25.370 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.370 [2024-05-15 00:08:25.810026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.370 [2024-05-15 00:08:25.810362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.810379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.810767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.811121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.811137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.811407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.811559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.811575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 
00:26:25.370 [2024-05-15 00:08:25.812020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.812294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.812310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.812665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.813000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.370 [2024-05-15 00:08:25.813016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.370 qpair failed and we were unable to recover it. 00:26:25.370 [2024-05-15 00:08:25.813441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.813706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.813722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.814148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.814495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.814514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.814860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.815220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.815237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.815529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.815818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.815834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.816079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.371 [2024-05-15 00:08:25.816115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.816320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.816336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 
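The rpc_cmd nvmf_create_transport -t tcp call traced above initializes the NVMe-oF TCP transport inside the target, which is what produces the "*** TCP Transport Init ***" notice. A minimal equivalent sketch (the extra -o flag passed by the test harness is omitted here rather than guessed at):

    # initialize the NVMe-oF TCP transport in the running target
    ./scripts/rpc.py nvmf_create_transport -t tcp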
00:26:25.371 [2024-05-15 00:08:25.816748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.817033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.817049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.817501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.817752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.817768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.818045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.818171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.818187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.818383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.818664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.818680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.819008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.819428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.819445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.819849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.820274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.820291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.820557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.820894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.820910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 
00:26:25.371 [2024-05-15 00:08:25.821251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.821658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.821674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.822101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.822254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.822270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.822558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.822922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.822939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.823315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.823678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.823694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.824065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.824350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.824367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.824791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.371 [2024-05-15 00:08:25.825051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.825068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 
00:26:25.371 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.371 [2024-05-15 00:08:25.825328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.371 [2024-05-15 00:08:25.825623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.825640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.371 [2024-05-15 00:08:25.825861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.826198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.826214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.826571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.827002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.827018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.827363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.827713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.827729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.828108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.828396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.828413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.828820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.829160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.829176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 
00:26:25.371 [2024-05-15 00:08:25.829390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.829730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.829746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.830015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.830361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.830378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.371 qpair failed and we were unable to recover it. 00:26:25.371 [2024-05-15 00:08:25.830731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.371 [2024-05-15 00:08:25.831015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.831031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.831310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.831643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.831660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.831996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.832343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.832360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.832655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.372 [2024-05-15 00:08:25.832999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.833015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 
00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:25.372 [2024-05-15 00:08:25.833308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.372 [2024-05-15 00:08:25.833737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.833754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b560 with addr=10.0.0.2, port=4420 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.834201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.834584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.834601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.834888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.835244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.835261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.835609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.835887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.835903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.836261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.836687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.836703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.837097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.837428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.837445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 
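The two RPCs traced above create the NVMe-oF subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host to connect, -s sets its serial number) and then expose the Malloc0 bdev as a namespace of that subsystem. As a standalone sketch of the same steps:

    # create the subsystem, allow any host, set a serial number
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # attach the Malloc0 bdev as a namespace of that subsystem
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0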
00:26:25.372 [2024-05-15 00:08:25.837791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.838155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.838171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.838456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.838884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.838900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.839254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.839608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.839624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.840071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.840429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.840446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.840755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.372 [2024-05-15 00:08:25.841183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.841204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.372 [2024-05-15 00:08:25.841548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.372 [2024-05-15 00:08:25.841928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.841945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 
00:26:25.372 [2024-05-15 00:08:25.842304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.842658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.842674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.842981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.843275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.843291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.843698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.843996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.844012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7090000b90 with addr=10.0.0.2, port=4420 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 [2024-05-15 00:08:25.844101] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:25.372 [2024-05-15 00:08:25.844351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.372 [2024-05-15 00:08:25.844351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.372 [2024-05-15 00:08:25.847426] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:26:25.372 [2024-05-15 00:08:25.847476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f7090000b90 (107): Transport endpoint is not connected 00:26:25.372 [2024-05-15 00:08:25.847531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.372 qpair failed and we were unable to recover it. 
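The nvmf_subsystem_add_listener call above is what finally opens 10.0.0.2:4420 on the target — hence the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice — and the warning that follows only flags the older [listen_]address.transport RPC field as deprecated in favor of trtype; it does not affect the test. Sketch of the equivalent call:

    # start listening for nqn.2016-06.io.spdk:cnode1 on TCP 10.0.0.2:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420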
00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.372 [2024-05-15 00:08:25.856675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.372 [2024-05-15 00:08:25.856817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.372 [2024-05-15 00:08:25.856838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.372 [2024-05-15 00:08:25.856850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.372 [2024-05-15 00:08:25.856861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.372 [2024-05-15 00:08:25.856883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.372 qpair failed and we were unable to recover it. 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.372 00:08:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 3731258 00:26:25.372 [2024-05-15 00:08:25.866716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.373 [2024-05-15 00:08:25.866827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.373 [2024-05-15 00:08:25.866846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.373 [2024-05-15 00:08:25.866856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.373 [2024-05-15 00:08:25.866864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.373 [2024-05-15 00:08:25.866884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.373 qpair failed and we were unable to recover it. 
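From this point the failures change character: the target rejects the Fabrics CONNECT for the I/O queue pair with "Unknown controller ID 0x1", and the host sees the CONNECT complete with sct 1, sc 130. Status code type 1 is the command-specific group, and 130 decimal is 0x82, which in the NVMe-oF Connect status table reads as "Connect Invalid Parameters" — a hedged interpretation from the spec, since the log itself only reports the raw numbers; it is consistent with the target having dropped the controller that the I/O queue pair is trying to attach to, which is exactly what this disconnect test provokes. A trivial check of the hex value:

    printf 'sc=%d -> 0x%02x\n' 130 130   # sc=130 -> 0x82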
00:26:25.373 [2024-05-15 00:08:25.876565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.373 [2024-05-15 00:08:25.876685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.373 [2024-05-15 00:08:25.876705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.373 [2024-05-15 00:08:25.876715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.373 [2024-05-15 00:08:25.876724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.373 [2024-05-15 00:08:25.876743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.373 qpair failed and we were unable to recover it. 00:26:25.373 [2024-05-15 00:08:25.886612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.373 [2024-05-15 00:08:25.886727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.373 [2024-05-15 00:08:25.886747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.373 [2024-05-15 00:08:25.886757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.373 [2024-05-15 00:08:25.886765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.373 [2024-05-15 00:08:25.886784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.373 qpair failed and we were unable to recover it. 00:26:25.373 [2024-05-15 00:08:25.896648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.373 [2024-05-15 00:08:25.896760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.373 [2024-05-15 00:08:25.896778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.373 [2024-05-15 00:08:25.896788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.373 [2024-05-15 00:08:25.896797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.373 [2024-05-15 00:08:25.896815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.373 qpair failed and we were unable to recover it. 
00:26:25.373 [2024-05-15 00:08:25.906754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.373 [2024-05-15 00:08:25.906860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.373 [2024-05-15 00:08:25.906878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.373 [2024-05-15 00:08:25.906888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.373 [2024-05-15 00:08:25.906897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.373 [2024-05-15 00:08:25.906915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.373 qpair failed and we were unable to recover it. 00:26:25.373 [2024-05-15 00:08:25.916790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.373 [2024-05-15 00:08:25.916907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.373 [2024-05-15 00:08:25.916925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.373 [2024-05-15 00:08:25.916935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.373 [2024-05-15 00:08:25.916943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.373 [2024-05-15 00:08:25.916962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.373 qpair failed and we were unable to recover it. 00:26:25.373 [2024-05-15 00:08:25.926805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.373 [2024-05-15 00:08:25.927071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.373 [2024-05-15 00:08:25.927090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.373 [2024-05-15 00:08:25.927099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.373 [2024-05-15 00:08:25.927107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.373 [2024-05-15 00:08:25.927126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.373 qpair failed and we were unable to recover it. 
00:26:25.373 [2024-05-15 00:08:25.936816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.373 [2024-05-15 00:08:25.936924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.373 [2024-05-15 00:08:25.936947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.373 [2024-05-15 00:08:25.936957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.373 [2024-05-15 00:08:25.936966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.373 [2024-05-15 00:08:25.936985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.373 qpair failed and we were unable to recover it. 00:26:25.373 [2024-05-15 00:08:25.946753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.373 [2024-05-15 00:08:25.946862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.373 [2024-05-15 00:08:25.946880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.373 [2024-05-15 00:08:25.946890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.373 [2024-05-15 00:08:25.946898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.373 [2024-05-15 00:08:25.946916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.373 qpair failed and we were unable to recover it. 00:26:25.633 [2024-05-15 00:08:25.956836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.633 [2024-05-15 00:08:25.956947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.633 [2024-05-15 00:08:25.956965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.633 [2024-05-15 00:08:25.956974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.633 [2024-05-15 00:08:25.956983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.633 [2024-05-15 00:08:25.957001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.633 qpair failed and we were unable to recover it. 
00:26:25.633 [2024-05-15 00:08:25.966919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.633 [2024-05-15 00:08:25.967082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.633 [2024-05-15 00:08:25.967100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.633 [2024-05-15 00:08:25.967109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.633 [2024-05-15 00:08:25.967118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.633 [2024-05-15 00:08:25.967136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.633 qpair failed and we were unable to recover it. 00:26:25.633 [2024-05-15 00:08:25.976973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.633 [2024-05-15 00:08:25.977093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.633 [2024-05-15 00:08:25.977111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.633 [2024-05-15 00:08:25.977120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.633 [2024-05-15 00:08:25.977129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.633 [2024-05-15 00:08:25.977150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.633 qpair failed and we were unable to recover it. 00:26:25.633 [2024-05-15 00:08:25.986964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.633 [2024-05-15 00:08:25.987072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.633 [2024-05-15 00:08:25.987090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.633 [2024-05-15 00:08:25.987100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.633 [2024-05-15 00:08:25.987108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.633 [2024-05-15 00:08:25.987126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.633 qpair failed and we were unable to recover it. 
00:26:25.633 [2024-05-15 00:08:25.997034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.633 [2024-05-15 00:08:25.997294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.633 [2024-05-15 00:08:25.997313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.633 [2024-05-15 00:08:25.997323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.633 [2024-05-15 00:08:25.997331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.633 [2024-05-15 00:08:25.997351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.633 qpair failed and we were unable to recover it. 00:26:25.633 [2024-05-15 00:08:26.007008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.633 [2024-05-15 00:08:26.007117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.633 [2024-05-15 00:08:26.007135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.633 [2024-05-15 00:08:26.007145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.633 [2024-05-15 00:08:26.007153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.633 [2024-05-15 00:08:26.007172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.633 qpair failed and we were unable to recover it. 00:26:25.633 [2024-05-15 00:08:26.017078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.633 [2024-05-15 00:08:26.017213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.633 [2024-05-15 00:08:26.017232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.017241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.017250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.017268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 
00:26:25.634 [2024-05-15 00:08:26.027068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.027173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.027198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.027209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.027217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.027235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 00:26:25.634 [2024-05-15 00:08:26.037082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.037204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.037223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.037232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.037241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.037259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 00:26:25.634 [2024-05-15 00:08:26.047035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.047147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.047166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.047176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.047185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.047211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 
00:26:25.634 [2024-05-15 00:08:26.057197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.057308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.057327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.057336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.057345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.057363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 00:26:25.634 [2024-05-15 00:08:26.067092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.067204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.067222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.067231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.067243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.067262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 00:26:25.634 [2024-05-15 00:08:26.077198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.077310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.077329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.077338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.077347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.077366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 
00:26:25.634 [2024-05-15 00:08:26.087239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.087351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.087369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.087378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.087387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.087405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 00:26:25.634 [2024-05-15 00:08:26.097311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.097424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.097442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.097451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.097460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.097478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 00:26:25.634 [2024-05-15 00:08:26.107435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.107551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.107569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.107579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.107588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.107606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 
00:26:25.634 [2024-05-15 00:08:26.117297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.117417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.117435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.117445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.117453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.117471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 00:26:25.634 [2024-05-15 00:08:26.127418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.127555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.127573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.127582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.127591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.127609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 00:26:25.634 [2024-05-15 00:08:26.137465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.137569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.137588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.137598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.137606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.137625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 
00:26:25.634 [2024-05-15 00:08:26.147442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.634 [2024-05-15 00:08:26.147551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.634 [2024-05-15 00:08:26.147570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.634 [2024-05-15 00:08:26.147579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.634 [2024-05-15 00:08:26.147588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.634 [2024-05-15 00:08:26.147606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.634 qpair failed and we were unable to recover it. 00:26:25.634 [2024-05-15 00:08:26.157446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.635 [2024-05-15 00:08:26.157558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.635 [2024-05-15 00:08:26.157576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.635 [2024-05-15 00:08:26.157586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.635 [2024-05-15 00:08:26.157597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.635 [2024-05-15 00:08:26.157616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.635 qpair failed and we were unable to recover it. 00:26:25.635 [2024-05-15 00:08:26.167504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.635 [2024-05-15 00:08:26.167616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.635 [2024-05-15 00:08:26.167634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.635 [2024-05-15 00:08:26.167644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.635 [2024-05-15 00:08:26.167653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.635 [2024-05-15 00:08:26.167671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.635 qpair failed and we were unable to recover it. 
00:26:25.635 [2024-05-15 00:08:26.177525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.635 [2024-05-15 00:08:26.177636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.635 [2024-05-15 00:08:26.177654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.635 [2024-05-15 00:08:26.177663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.635 [2024-05-15 00:08:26.177672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.635 [2024-05-15 00:08:26.177691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.635 qpair failed and we were unable to recover it. 00:26:25.635 [2024-05-15 00:08:26.187553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.635 [2024-05-15 00:08:26.187661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.635 [2024-05-15 00:08:26.187679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.635 [2024-05-15 00:08:26.187688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.635 [2024-05-15 00:08:26.187697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.635 [2024-05-15 00:08:26.187715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.635 qpair failed and we were unable to recover it. 00:26:25.635 [2024-05-15 00:08:26.197572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.635 [2024-05-15 00:08:26.197682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.635 [2024-05-15 00:08:26.197700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.635 [2024-05-15 00:08:26.197710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.635 [2024-05-15 00:08:26.197719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7090000b90 00:26:25.635 [2024-05-15 00:08:26.197737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:25.635 qpair failed and we were unable to recover it. 00:26:25.635 [2024-05-15 00:08:26.197760] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:25.635 A controller has encountered a failure and is being reset. 00:26:25.635 Controller properly reset. 
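Note on the block above: this is the expected pattern for the target_disconnect test. While the target side is being torn down, every I/O-qpair Fabrics CONNECT is rejected with "Unknown controller ID 0x1" and completed with sct 1, sc 130 (0x82, a command-specific CONNECT error), the TCP qpair poll then fails with transport error -6 (ENXIO), and the host keeps retrying until the failed keep-alive triggers a controller reset, which the log reports as completing properly. A minimal, hedged sketch of how the same listener could be exercised by hand after such a reset is shown below; the rpc.py path and the availability of nvme-cli on the initiator are assumptions, while the address, port and subsystem NQN are the ones printed in this log:

  # target side: re-add the discovery listener (mirrors the rpc_cmd issued earlier in this test)
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # initiator side: confirm the subsystem is reachable again
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
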
00:26:30.918 Initializing NVMe Controllers 00:26:30.918 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:30.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:30.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:30.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:30.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:30.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:30.918 Initialization complete. Launching workers. 00:26:30.918 Starting thread on core 1 00:26:30.918 Starting thread on core 2 00:26:30.918 Starting thread on core 3 00:26:30.918 Starting thread on core 0 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:26:30.918 00:26:30.918 real 0m11.221s 00:26:30.918 user 0m33.531s 00:26:30.918 sys 0m6.034s 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.918 ************************************ 00:26:30.918 END TEST nvmf_target_disconnect_tc2 00:26:30.918 ************************************ 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.918 rmmod nvme_tcp 00:26:30.918 rmmod nvme_fabrics 00:26:30.918 rmmod nvme_keyring 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3731865 ']' 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3731865 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3731865 ']' 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3731865 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3731865 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@952 -- # process_name=reactor_4 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3731865' 00:26:30.918 killing process with pid 3731865 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3731865 00:26:30.918 [2024-05-15 00:08:31.242292] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3731865 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.918 00:08:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.452 00:08:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.452 00:26:33.452 real 0m21.038s 00:26:33.452 user 1m0.039s 00:26:33.452 sys 0m12.427s 00:26:33.452 00:08:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:33.452 00:08:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:33.452 ************************************ 00:26:33.452 END TEST nvmf_target_disconnect 00:26:33.452 ************************************ 00:26:33.452 00:08:33 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:26:33.452 00:08:33 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.452 00:08:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:33.452 00:08:33 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:26:33.452 00:26:33.452 real 20m7.598s 00:26:33.452 user 41m19.905s 00:26:33.452 sys 7m40.230s 00:26:33.452 00:08:33 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:33.452 00:08:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:33.452 ************************************ 00:26:33.452 END TEST nvmf_tcp 00:26:33.452 ************************************ 00:26:33.452 00:08:33 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:26:33.452 00:08:33 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:33.452 00:08:33 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:33.452 00:08:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:33.452 00:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:33.452 ************************************ 00:26:33.452 START TEST spdkcli_nvmf_tcp 00:26:33.452 ************************************ 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:33.452 * Looking for test storage... 00:26:33.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.452 00:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3733591 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3733591 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3733591 ']' 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:33.453 00:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:33.453 [2024-05-15 00:08:33.940518] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:26:33.453 [2024-05-15 00:08:33.940567] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733591 ] 00:26:33.453 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.453 [2024-05-15 00:08:34.007113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:33.712 [2024-05-15 00:08:34.080604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.712 [2024-05-15 00:08:34.080607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:34.279 00:08:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:34.279 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:34.279 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:34.279 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:34.279 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:34.279 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:34.279 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:34.279 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:34.279 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:34.279 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:34.279 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:34.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:34.280 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:34.280 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:34.280 ' 00:26:36.818 [2024-05-15 00:08:37.147684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.753 [2024-05-15 00:08:38.323347] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:37.753 [2024-05-15 00:08:38.323759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:40.286 [2024-05-15 00:08:40.486438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:42.191 [2024-05-15 00:08:42.344144] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:43.570 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:43.570 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:43.570 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:43.571 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:43.571 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:43.571 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:43.571 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:43.571 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:43.571 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:43.571 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:43.571 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:43.571 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:43.571 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:43.571 00:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:43.571 00:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.571 00:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.571 00:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:43.571 00:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:43.571 00:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.571 00:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:43.571 00:08:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:26:43.830 00:08:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:43.830 00:08:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:43.830 00:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:43.830 00:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.830 00:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.830 00:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:43.830 00:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:43.830 00:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.830 00:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:43.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:43.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:43.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:43.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:43.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:43.830 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:43.830 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:43.830 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:43.830 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:43.830 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:43.830 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:43.830 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:43.830 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:43.830 ' 00:26:49.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:49.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:49.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:49.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:49.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:49.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:49.103 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:49.103 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:49.103 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:49.103 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:49.103 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:49.103 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:49.103 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:49.103 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3733591 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3733591 ']' 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3733591 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3733591 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3733591' 00:26:49.103 killing process with pid 3733591 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3733591 00:26:49.103 [2024-05-15 00:08:49.411053] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3733591 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3733591 ']' 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3733591 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3733591 ']' 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3733591 00:26:49.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3733591) - No such process 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3733591 is not found' 00:26:49.103 Process with pid 3733591 is not found 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:49.103 00:08:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:49.104 00:08:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:49.104 00:26:49.104 real 0m15.870s 00:26:49.104 user 0m32.641s 00:26:49.104 sys 0m0.864s 00:26:49.104 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:49.104 00:08:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:26:49.104 ************************************ 00:26:49.104 END TEST spdkcli_nvmf_tcp 00:26:49.104 ************************************ 00:26:49.104 00:08:49 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:49.104 00:08:49 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:49.104 00:08:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:49.104 00:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:49.364 ************************************ 00:26:49.364 START TEST nvmf_identify_passthru 00:26:49.364 ************************************ 00:26:49.364 00:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:49.364 * Looking for test storage... 00:26:49.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.364 00:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.364 00:08:49 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.364 00:08:49 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.364 00:08:49 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.364 00:08:49 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.364 00:08:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.364 00:08:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.364 00:08:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:49.364 00:08:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.364 00:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.364 00:08:49 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.364 00:08:49 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.364 00:08:49 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.364 00:08:49 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.364 00:08:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.364 00:08:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.364 00:08:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:49.364 00:08:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.364 00:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.364 00:08:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:49.364 00:08:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:49.364 00:08:49 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.364 00:08:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.000 00:08:55 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:56.000 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:56.000 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:56.000 Found net devices under 0000:af:00.0: cvl_0_0 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.000 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:56.001 Found net devices under 0000:af:00.1: cvl_0_1 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
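The trace above walks nvmf/common.sh discovering which kernel net devices sit behind the two matched E810 functions: for each PCI address it globs /sys/bus/pci/devices/$pci/net/ and records whatever netdev it finds there (cvl_0_0 and cvl_0_1 on this node). A minimal sketch of that lookup, assuming only that sysfs is mounted and that 0000:af:00.0 is one of the matched functions:

    pci=0000:af:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue            # a function bound to vfio-pci etc. exposes no netdev
        echo "Found net devices under $pci: ${dev##*/}"
    done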
00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.001 00:08:55 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:26:56.001 00:26:56.001 --- 10.0.0.2 ping statistics --- 00:26:56.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.001 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:26:56.001 00:26:56.001 --- 10.0.0.1 ping statistics --- 00:26:56.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.001 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:56.001 00:08:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:56.001 00:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:56.001 00:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:26:56.001 00:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:d8:00.0 00:26:56.001 00:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:26:56.001 00:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:26:56.001 00:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:26:56.001 00:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:56.001 00:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:56.001 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.280 
00:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:27:01.280 00:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:27:01.280 00:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:01.280 00:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:01.280 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.474 00:09:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:05.474 00:09:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:05.474 00:09:05 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.474 00:09:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:05.474 00:09:05 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:05.474 00:09:05 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:05.474 00:09:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:05.474 00:09:05 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3741401 00:27:05.474 00:09:05 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:05.474 00:09:05 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:05.474 00:09:05 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3741401 00:27:05.474 00:09:05 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3741401 ']' 00:27:05.474 00:09:05 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.474 00:09:05 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:05.474 00:09:05 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.474 00:09:05 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:05.474 00:09:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:05.474 [2024-05-15 00:09:05.720384] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:27:05.474 [2024-05-15 00:09:05.720439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.474 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.474 [2024-05-15 00:09:05.795541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.474 [2024-05-15 00:09:05.870128] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.474 [2024-05-15 00:09:05.870167] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
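The two spdk_nvme_identify invocations against trtype:PCIe a few records up capture the local drive's serial number (BTLN916500W71P6AGN) and model string before any NVMe-oF plumbing exists; the same grep/awk extraction is repeated over TCP near the end of the test so the two answers can be compared. A condensed sketch of the local half, with $SPDK_DIR standing in for the full Jenkins workspace path used in the log:

    bdf=$("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
    serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
               | grep 'Serial Number:' | awk '{print $3}')
    echo "local controller $bdf reports serial $serial"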
00:27:05.474 [2024-05-15 00:09:05.870176] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.474 [2024-05-15 00:09:05.870185] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.474 [2024-05-15 00:09:05.870197] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.474 [2024-05-15 00:09:05.870242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.474 [2024-05-15 00:09:05.870347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.474 [2024-05-15 00:09:05.870430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.474 [2024-05-15 00:09:05.870432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.042 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:06.042 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:27:06.042 00:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:06.042 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.042 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:06.042 INFO: Log level set to 20 00:27:06.042 INFO: Requests: 00:27:06.042 { 00:27:06.042 "jsonrpc": "2.0", 00:27:06.042 "method": "nvmf_set_config", 00:27:06.042 "id": 1, 00:27:06.042 "params": { 00:27:06.042 "admin_cmd_passthru": { 00:27:06.042 "identify_ctrlr": true 00:27:06.042 } 00:27:06.042 } 00:27:06.042 } 00:27:06.042 00:27:06.042 INFO: response: 00:27:06.042 { 00:27:06.042 "jsonrpc": "2.0", 00:27:06.042 "id": 1, 00:27:06.042 "result": true 00:27:06.042 } 00:27:06.042 00:27:06.042 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.042 00:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:06.042 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.042 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:06.042 INFO: Setting log level to 20 00:27:06.042 INFO: Setting log level to 20 00:27:06.042 INFO: Log level set to 20 00:27:06.042 INFO: Log level set to 20 00:27:06.042 INFO: Requests: 00:27:06.042 { 00:27:06.042 "jsonrpc": "2.0", 00:27:06.042 "method": "framework_start_init", 00:27:06.042 "id": 1 00:27:06.042 } 00:27:06.042 00:27:06.042 INFO: Requests: 00:27:06.042 { 00:27:06.042 "jsonrpc": "2.0", 00:27:06.042 "method": "framework_start_init", 00:27:06.042 "id": 1 00:27:06.042 } 00:27:06.042 00:27:06.042 [2024-05-15 00:09:06.618121] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:06.042 INFO: response: 00:27:06.042 { 00:27:06.042 "jsonrpc": "2.0", 00:27:06.042 "id": 1, 00:27:06.042 "result": true 00:27:06.042 } 00:27:06.042 00:27:06.042 INFO: response: 00:27:06.042 { 00:27:06.042 "jsonrpc": "2.0", 00:27:06.042 "id": 1, 00:27:06.042 "result": true 00:27:06.042 } 00:27:06.042 00:27:06.042 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.042 00:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.042 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.042 00:09:06 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.042 INFO: Setting log level to 40 00:27:06.042 INFO: Setting log level to 40 00:27:06.042 INFO: Setting log level to 40 00:27:06.042 [2024-05-15 00:09:06.631517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.301 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.301 00:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:06.301 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.301 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:06.301 00:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:27:06.301 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.301 00:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:09.592 Nvme0n1 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.592 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.592 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.592 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:09.592 [2024-05-15 00:09:09.552843] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:09.592 [2024-05-15 00:09:09.553108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.592 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.592 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:09.592 [ 00:27:09.592 { 00:27:09.592 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:09.592 "subtype": "Discovery", 00:27:09.592 "listen_addresses": [], 00:27:09.592 "allow_any_host": true, 00:27:09.592 "hosts": [] 00:27:09.592 }, 00:27:09.592 { 00:27:09.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:09.592 "subtype": "NVMe", 00:27:09.592 "listen_addresses": [ 00:27:09.592 { 00:27:09.592 "trtype": "TCP", 
00:27:09.592 "adrfam": "IPv4", 00:27:09.592 "traddr": "10.0.0.2", 00:27:09.592 "trsvcid": "4420" 00:27:09.592 } 00:27:09.592 ], 00:27:09.592 "allow_any_host": true, 00:27:09.592 "hosts": [], 00:27:09.592 "serial_number": "SPDK00000000000001", 00:27:09.593 "model_number": "SPDK bdev Controller", 00:27:09.593 "max_namespaces": 1, 00:27:09.593 "min_cntlid": 1, 00:27:09.593 "max_cntlid": 65519, 00:27:09.593 "namespaces": [ 00:27:09.593 { 00:27:09.593 "nsid": 1, 00:27:09.593 "bdev_name": "Nvme0n1", 00:27:09.593 "name": "Nvme0n1", 00:27:09.593 "nguid": "CC5EAB41E2B14913884BAE4F8FF04021", 00:27:09.593 "uuid": "cc5eab41-e2b1-4913-884b-ae4f8ff04021" 00:27:09.593 } 00:27:09.593 ] 00:27:09.593 } 00:27:09.593 ] 00:27:09.593 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:09.593 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:09.593 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:09.593 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.593 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:09.593 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:09.593 00:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:09.593 rmmod nvme_tcp 00:27:09.593 rmmod nvme_fabrics 00:27:09.593 rmmod 
nvme_keyring 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3741401 ']' 00:27:09.593 00:09:09 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3741401 00:27:09.593 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3741401 ']' 00:27:09.593 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3741401 00:27:09.593 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:27:09.593 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:09.593 00:09:09 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3741401 00:27:09.593 00:09:10 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:09.593 00:09:10 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:09.593 00:09:10 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3741401' 00:27:09.593 killing process with pid 3741401 00:27:09.593 00:09:10 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3741401 00:27:09.593 [2024-05-15 00:09:10.006657] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:09.593 00:09:10 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3741401 00:27:11.498 00:09:12 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:11.498 00:09:12 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:11.498 00:09:12 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:11.498 00:09:12 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:11.498 00:09:12 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:11.498 00:09:12 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.498 00:09:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:11.498 00:09:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.031 00:09:14 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.031 00:27:14.031 real 0m24.457s 00:27:14.031 user 0m32.936s 00:27:14.031 sys 0m6.168s 00:27:14.031 00:09:14 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:14.031 00:09:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:14.031 ************************************ 00:27:14.031 END TEST nvmf_identify_passthru 00:27:14.031 ************************************ 00:27:14.031 00:09:14 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:14.031 00:09:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:14.031 00:09:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:14.031 00:09:14 -- common/autotest_common.sh@10 -- # set +x 00:27:14.031 ************************************ 00:27:14.031 START TEST nvmf_dif 
00:27:14.031 ************************************ 00:27:14.031 00:09:14 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:14.031 * Looking for test storage... 00:27:14.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:14.031 00:09:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.031 00:09:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:14.031 00:09:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.031 00:09:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.032 00:09:14 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.032 00:09:14 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.032 00:09:14 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.032 00:09:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.032 00:09:14 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.032 00:09:14 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.032 00:09:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:14.032 00:09:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:14.032 00:09:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:14.032 00:09:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:14.032 00:09:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:14.032 00:09:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:14.032 00:09:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.032 00:09:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:14.032 00:09:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:14.032 00:09:14 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:14.032 00:09:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:20.641 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:20.641 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.641 00:09:20 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:20.641 Found net devices under 0000:af:00.0: cvl_0_0 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:20.641 Found net devices under 0000:af:00.1: cvl_0_1 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:20.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:27:20.641 00:27:20.641 --- 10.0.0.2 ping statistics --- 00:27:20.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.641 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:27:20.641 00:09:20 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:20.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:27:20.641 00:27:20.641 --- 10.0.0.1 ping statistics --- 00:27:20.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.642 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:27:20.642 00:09:21 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.642 00:09:21 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:20.642 00:09:21 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:20.642 00:09:21 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:23.935 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:23.935 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:23.935 00:09:24 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.935 00:09:24 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:23.935 00:09:24 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:23.935 00:09:24 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.935 00:09:24 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:23.935 00:09:24 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:23.935 00:09:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:23.935 00:09:24 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:27:23.935 00:09:24 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:23.935 00:09:24 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:23.935 00:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:23.935 00:09:24 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3747440 00:27:23.935 00:09:24 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:23.935 00:09:24 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3747440 00:27:23.935 00:09:24 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3747440 ']' 00:27:23.935 00:09:24 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.935 00:09:24 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:23.935 00:09:24 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.935 00:09:24 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:23.935 00:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:23.935 [2024-05-15 00:09:24.157643] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:27:23.935 [2024-05-15 00:09:24.157687] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.935 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.935 [2024-05-15 00:09:24.228436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.935 [2024-05-15 00:09:24.295262] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.935 [2024-05-15 00:09:24.295304] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.935 [2024-05-15 00:09:24.295313] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.935 [2024-05-15 00:09:24.295321] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.935 [2024-05-15 00:09:24.295344] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
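In the trace above, nvmfappstart amounts to launching build/bin/nvmf_tgt inside the target network namespace, recording its pid, and waiting for the JSON-RPC socket to come up before any rpc_cmd is issued. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and a simple polling loop in place of the real waitforlisten helper:

    # Start the SPDK NVMe-oF target inside the namespace created earlier and
    # poll its JSON-RPC socket until it answers.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!

    # rpc.py exits non-zero until the app is listening on /var/tmp/spdk.sock.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
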
00:27:23.935 [2024-05-15 00:09:24.295365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.503 00:09:24 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:24.503 00:09:24 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:27:24.503 00:09:24 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.503 00:09:24 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:24.503 00:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:24.503 00:09:25 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.503 00:09:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:24.503 00:09:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:24.503 00:09:25 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.503 00:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:24.503 [2024-05-15 00:09:25.008744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.503 00:09:25 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.503 00:09:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:24.503 00:09:25 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:24.503 00:09:25 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:24.503 00:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:24.503 ************************************ 00:27:24.503 START TEST fio_dif_1_default 00:27:24.503 ************************************ 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.503 bdev_null0 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.503 [2024-05-15 00:09:25.084894] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:24.503 [2024-05-15 00:09:25.085131] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:24.503 00:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:24.503 { 00:27:24.503 "params": { 00:27:24.503 "name": "Nvme$subsystem", 00:27:24.503 "trtype": "$TEST_TRANSPORT", 00:27:24.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.503 "adrfam": "ipv4", 00:27:24.503 "trsvcid": "$NVMF_PORT", 00:27:24.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.503 "hdgst": ${hdgst:-false}, 00:27:24.503 "ddgst": ${ddgst:-false} 00:27:24.503 }, 00:27:24.503 "method": "bdev_nvme_attach_controller" 00:27:24.503 } 00:27:24.503 EOF 
00:27:24.503 )") 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:24.762 "params": { 00:27:24.762 "name": "Nvme0", 00:27:24.762 "trtype": "tcp", 00:27:24.762 "traddr": "10.0.0.2", 00:27:24.762 "adrfam": "ipv4", 00:27:24.762 "trsvcid": "4420", 00:27:24.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:24.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:24.762 "hdgst": false, 00:27:24.762 "ddgst": false 00:27:24.762 }, 00:27:24.762 "method": "bdev_nvme_attach_controller" 00:27:24.762 }' 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:24.762 00:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:25.021 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:25.021 fio-3.35 00:27:25.021 Starting 1 thread 00:27:25.021 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.233 00:27:37.233 filename0: (groupid=0, jobs=1): err= 0: pid=3747870: Wed May 15 00:09:36 2024 00:27:37.233 read: IOPS=184, BW=740KiB/s (758kB/s)(7424KiB/10035msec) 00:27:37.233 slat (nsec): min=5638, max=26429, avg=5875.63, stdev=942.05 00:27:37.233 clat (usec): min=1321, max=44639, avg=21610.27, stdev=20168.06 00:27:37.233 lat (usec): min=1327, max=44665, avg=21616.15, stdev=20168.02 00:27:37.233 clat percentiles (usec): 00:27:37.233 | 1.00th=[ 1336], 5.00th=[ 1336], 10.00th=[ 1352], 20.00th=[ 1352], 00:27:37.233 | 30.00th=[ 1352], 40.00th=[ 1369], 50.00th=[41681], 60.00th=[41681], 00:27:37.233 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:27:37.233 | 99.00th=[42730], 99.50th=[42730], 
99.90th=[44827], 99.95th=[44827], 00:27:37.233 | 99.99th=[44827] 00:27:37.233 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=740.80, stdev=34.86, samples=20 00:27:37.233 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:27:37.233 lat (msec) : 2=49.78%, 50=50.22% 00:27:37.233 cpu : usr=85.90%, sys=13.85%, ctx=12, majf=0, minf=207 00:27:37.233 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.233 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.233 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:37.233 00:27:37.233 Run status group 0 (all jobs): 00:27:37.233 READ: bw=740KiB/s (758kB/s), 740KiB/s-740KiB/s (758kB/s-758kB/s), io=7424KiB (7602kB), run=10035-10035msec 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.233 00:27:37.233 real 0m11.207s 00:27:37.233 user 0m17.620s 00:27:37.233 sys 0m1.712s 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 ************************************ 00:27:37.233 END TEST fio_dif_1_default 00:27:37.233 ************************************ 00:27:37.233 00:09:36 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:37.233 00:09:36 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:37.233 00:09:36 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 ************************************ 00:27:37.233 START TEST fio_dif_1_multi_subsystems 00:27:37.233 ************************************ 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@28 -- # local sub 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 bdev_null0 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 [2024-05-15 00:09:36.381338] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 bdev_null1 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.233 00:09:36 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.233 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.233 { 00:27:37.233 "params": { 00:27:37.233 "name": "Nvme$subsystem", 
00:27:37.233 "trtype": "$TEST_TRANSPORT", 00:27:37.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.233 "adrfam": "ipv4", 00:27:37.233 "trsvcid": "$NVMF_PORT", 00:27:37.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.233 "hdgst": ${hdgst:-false}, 00:27:37.233 "ddgst": ${ddgst:-false} 00:27:37.233 }, 00:27:37.233 "method": "bdev_nvme_attach_controller" 00:27:37.234 } 00:27:37.234 EOF 00:27:37.234 )") 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.234 { 00:27:37.234 "params": { 00:27:37.234 "name": "Nvme$subsystem", 00:27:37.234 "trtype": "$TEST_TRANSPORT", 00:27:37.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.234 "adrfam": "ipv4", 00:27:37.234 "trsvcid": "$NVMF_PORT", 00:27:37.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.234 "hdgst": ${hdgst:-false}, 00:27:37.234 "ddgst": ${ddgst:-false} 00:27:37.234 }, 00:27:37.234 "method": "bdev_nvme_attach_controller" 00:27:37.234 } 00:27:37.234 EOF 00:27:37.234 )") 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:37.234 "params": { 00:27:37.234 "name": "Nvme0", 00:27:37.234 "trtype": "tcp", 00:27:37.234 "traddr": "10.0.0.2", 00:27:37.234 "adrfam": "ipv4", 00:27:37.234 "trsvcid": "4420", 00:27:37.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:37.234 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:37.234 "hdgst": false, 00:27:37.234 "ddgst": false 00:27:37.234 }, 00:27:37.234 "method": "bdev_nvme_attach_controller" 00:27:37.234 },{ 00:27:37.234 "params": { 00:27:37.234 "name": "Nvme1", 00:27:37.234 "trtype": "tcp", 00:27:37.234 "traddr": "10.0.0.2", 00:27:37.234 "adrfam": "ipv4", 00:27:37.234 "trsvcid": "4420", 00:27:37.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.234 "hdgst": false, 00:27:37.234 "ddgst": false 00:27:37.234 }, 00:27:37.234 "method": "bdev_nvme_attach_controller" 00:27:37.234 }' 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:37.234 00:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:37.234 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:37.234 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:37.234 fio-3.35 00:27:37.234 Starting 2 threads 00:27:37.234 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.216 00:27:47.216 filename0: (groupid=0, jobs=1): err= 0: pid=3749889: Wed May 15 00:09:47 2024 00:27:47.216 read: IOPS=183, BW=734KiB/s (752kB/s)(7360KiB/10025msec) 00:27:47.216 slat (nsec): min=5772, max=30087, avg=6825.22, stdev=1947.29 00:27:47.216 clat (usec): min=607, max=42659, avg=21771.95, stdev=20210.44 00:27:47.216 lat (usec): min=613, max=42665, avg=21778.78, stdev=20209.85 00:27:47.216 clat percentiles (usec): 00:27:47.216 | 1.00th=[ 627], 5.00th=[ 1254], 10.00th=[ 1336], 20.00th=[ 1352], 00:27:47.216 | 30.00th=[ 1352], 40.00th=[ 1369], 50.00th=[41157], 60.00th=[41681], 00:27:47.216 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:27:47.216 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:27:47.216 | 99.99th=[42730] 
00:27:47.216 bw ( KiB/s): min= 608, max= 768, per=49.88%, avg=734.40, stdev=43.40, samples=20 00:27:47.216 iops : min= 152, max= 192, avg=183.60, stdev=10.85, samples=20 00:27:47.216 lat (usec) : 750=4.35% 00:27:47.216 lat (msec) : 2=45.00%, 50=50.65% 00:27:47.216 cpu : usr=93.14%, sys=6.60%, ctx=10, majf=0, minf=106 00:27:47.216 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:47.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.216 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.216 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:47.216 filename1: (groupid=0, jobs=1): err= 0: pid=3749890: Wed May 15 00:09:47 2024 00:27:47.216 read: IOPS=184, BW=738KiB/s (756kB/s)(7392KiB/10015msec) 00:27:47.216 slat (nsec): min=5774, max=27439, avg=6831.82, stdev=1957.16 00:27:47.216 clat (usec): min=1327, max=44109, avg=21658.01, stdev=20201.66 00:27:47.216 lat (usec): min=1333, max=44137, avg=21664.84, stdev=20201.06 00:27:47.216 clat percentiles (usec): 00:27:47.216 | 1.00th=[ 1336], 5.00th=[ 1336], 10.00th=[ 1352], 20.00th=[ 1352], 00:27:47.216 | 30.00th=[ 1352], 40.00th=[ 1369], 50.00th=[41681], 60.00th=[41681], 00:27:47.216 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:27:47.216 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:27:47.216 | 99.99th=[44303] 00:27:47.216 bw ( KiB/s): min= 672, max= 768, per=50.08%, avg=737.60, stdev=33.60, samples=20 00:27:47.216 iops : min= 168, max= 192, avg=184.40, stdev= 8.40, samples=20 00:27:47.216 lat (msec) : 2=49.78%, 50=50.22% 00:27:47.216 cpu : usr=93.58%, sys=6.17%, ctx=9, majf=0, minf=135 00:27:47.216 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:47.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.216 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.216 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:47.216 00:27:47.216 Run status group 0 (all jobs): 00:27:47.216 READ: bw=1472KiB/s (1507kB/s), 734KiB/s-738KiB/s (752kB/s-756kB/s), io=14.4MiB (15.1MB), run=10015-10025msec 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.216 00:27:47.216 real 0m11.400s 00:27:47.216 user 0m27.685s 00:27:47.216 sys 0m1.645s 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:47.216 00:09:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:47.216 ************************************ 00:27:47.216 END TEST fio_dif_1_multi_subsystems 00:27:47.216 ************************************ 00:27:47.216 00:09:47 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:47.217 00:09:47 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:47.217 00:09:47 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:47.217 00:09:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:47.476 ************************************ 00:27:47.476 START TEST fio_dif_rand_params 00:27:47.476 ************************************ 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:47.476 00:09:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.476 bdev_null0 00:27:47.476 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.477 [2024-05-15 00:09:47.872147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 
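Each create_subsystem step in the trace reduces to four RPCs against the running target: create a null bdev with 16 bytes of metadata and the requested DIF type, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. The rpc.py equivalents of the rpc_cmd calls above, for the --dif-type 3 case (rpc_cmd is effectively a wrapper that talks to the target over /var/tmp/spdk.sock):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
         --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420
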
00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.477 { 00:27:47.477 "params": { 00:27:47.477 "name": "Nvme$subsystem", 00:27:47.477 "trtype": "$TEST_TRANSPORT", 00:27:47.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.477 "adrfam": "ipv4", 00:27:47.477 "trsvcid": "$NVMF_PORT", 00:27:47.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.477 "hdgst": ${hdgst:-false}, 00:27:47.477 "ddgst": ${ddgst:-false} 00:27:47.477 }, 00:27:47.477 "method": "bdev_nvme_attach_controller" 00:27:47.477 } 00:27:47.477 EOF 00:27:47.477 )") 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
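With the JSON config on /dev/fd/62, gen_fio_conf streams the job description itself on /dev/fd/61. For the parameter set chosen for this run (randread, bs=128k, numjobs=3, iodepth=3, 5-second runtime, as visible in the fio banner and results below) the job file amounts to something like the following sketch; the option list and the Nvme0n1 bdev name (namespace 1 of the attached Nvme0 controller) are illustrative, not the exact gen_fio_conf output:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    cat > /tmp/dif_rand.fio <<'FIO'
    [global]
    ; thread=1 is required by the SPDK fio plugins
    thread=1
    time_based=1
    runtime=5
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3

    [filename0]
    ; namespace of the controller attached via the JSON config
    filename=Nvme0n1
    FIO

    # Same invocation as the trace, with files instead of /dev/fd descriptors
    # (/tmp/nvmf_fio.json stands in for the generated JSON config):
    LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /tmp/nvmf_fio.json /tmp/dif_rand.fio
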
00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:47.477 "params": { 00:27:47.477 "name": "Nvme0", 00:27:47.477 "trtype": "tcp", 00:27:47.477 "traddr": "10.0.0.2", 00:27:47.477 "adrfam": "ipv4", 00:27:47.477 "trsvcid": "4420", 00:27:47.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.477 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:47.477 "hdgst": false, 00:27:47.477 "ddgst": false 00:27:47.477 }, 00:27:47.477 "method": "bdev_nvme_attach_controller" 00:27:47.477 }' 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:47.477 00:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.736 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:47.736 ... 
00:27:47.736 fio-3.35 00:27:47.736 Starting 3 threads 00:27:47.736 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.311 00:27:54.311 filename0: (groupid=0, jobs=1): err= 0: pid=3751898: Wed May 15 00:09:53 2024 00:27:54.311 read: IOPS=256, BW=32.1MiB/s (33.6MB/s)(162MiB/5047msec) 00:27:54.311 slat (nsec): min=6008, max=85892, avg=13125.46, stdev=6400.08 00:27:54.311 clat (usec): min=4443, max=92879, avg=11673.07, stdev=12155.76 00:27:54.311 lat (usec): min=4451, max=92900, avg=11686.20, stdev=12156.20 00:27:54.311 clat percentiles (usec): 00:27:54.311 | 1.00th=[ 4817], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6587], 00:27:54.311 | 30.00th=[ 7046], 40.00th=[ 7504], 50.00th=[ 8094], 60.00th=[ 8717], 00:27:54.311 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[12649], 95.00th=[49546], 00:27:54.311 | 99.00th=[53216], 99.50th=[55837], 99.90th=[92799], 99.95th=[92799], 00:27:54.311 | 99.99th=[92799] 00:27:54.311 bw ( KiB/s): min=23296, max=43776, per=33.06%, avg=33075.20, stdev=6256.51, samples=10 00:27:54.311 iops : min= 182, max= 342, avg=258.40, stdev=48.88, samples=10 00:27:54.311 lat (msec) : 10=76.45%, 20=15.37%, 50=3.78%, 100=4.40% 00:27:54.311 cpu : usr=93.82%, sys=5.75%, ctx=12, majf=0, minf=121 00:27:54.311 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:54.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.311 issued rwts: total=1295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.311 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:54.311 filename0: (groupid=0, jobs=1): err= 0: pid=3751899: Wed May 15 00:09:53 2024 00:27:54.311 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(153MiB/5025msec) 00:27:54.311 slat (nsec): min=5911, max=80461, avg=11121.07, stdev=5471.65 00:27:54.311 clat (usec): min=4148, max=98416, avg=12322.06, stdev=13966.94 00:27:54.311 lat (usec): min=4155, max=98438, avg=12333.19, stdev=13967.30 00:27:54.311 clat percentiles (usec): 00:27:54.311 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 5735], 20.00th=[ 6390], 00:27:54.311 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8717], 00:27:54.311 | 70.00th=[ 9503], 80.00th=[10552], 90.00th=[13960], 95.00th=[50594], 00:27:54.311 | 99.00th=[55837], 99.50th=[92799], 99.90th=[98042], 99.95th=[98042], 00:27:54.311 | 99.99th=[98042] 00:27:54.311 bw ( KiB/s): min=20736, max=41728, per=31.19%, avg=31206.40, stdev=7572.53, samples=10 00:27:54.311 iops : min= 162, max= 326, avg=243.80, stdev=59.16, samples=10 00:27:54.311 lat (msec) : 10=76.84%, 20=13.91%, 50=2.45%, 100=6.79% 00:27:54.311 cpu : usr=92.89%, sys=6.71%, ctx=6, majf=0, minf=196 00:27:54.311 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:54.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.311 issued rwts: total=1222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.311 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:54.311 filename0: (groupid=0, jobs=1): err= 0: pid=3751900: Wed May 15 00:09:53 2024 00:27:54.311 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(179MiB/5004msec) 00:27:54.311 slat (nsec): min=5931, max=32030, avg=11331.21, stdev=5366.61 00:27:54.311 clat (msec): min=4, max=100, avg=10.50, stdev=11.12 00:27:54.311 lat (msec): min=4, max=100, avg=10.51, stdev=11.12 00:27:54.311 clat percentiles (msec): 00:27:54.311 | 1.00th=[ 
5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:27:54.311 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:27:54.311 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 12], 95.00th=[ 48], 00:27:54.311 | 99.00th=[ 55], 99.50th=[ 56], 99.90th=[ 97], 99.95th=[ 101], 00:27:54.311 | 99.99th=[ 101] 00:27:54.311 bw ( KiB/s): min=26880, max=47616, per=36.46%, avg=36480.00, stdev=7520.23, samples=10 00:27:54.311 iops : min= 210, max= 372, avg=285.00, stdev=58.75, samples=10 00:27:54.311 lat (msec) : 10=83.33%, 20=10.92%, 50=2.52%, 100=3.15%, 250=0.07% 00:27:54.311 cpu : usr=92.40%, sys=7.14%, ctx=9, majf=0, minf=90 00:27:54.311 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:54.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.311 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.311 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:54.311 00:27:54.311 Run status group 0 (all jobs): 00:27:54.311 READ: bw=97.7MiB/s (102MB/s), 30.4MiB/s-35.7MiB/s (31.9MB/s-37.4MB/s), io=493MiB (517MB), run=5004-5047msec 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 bdev_null0 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 [2024-05-15 00:09:54.111099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.311 bdev_null1 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.311 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.312 00:09:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.312 bdev_null2 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:54.312 00:09:54 
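[Annotation] The trace above creates three DIF type 2 null bdevs (bdev_null0/1/2) and exports each one as its own NVMe/TCP subsystem listening on 10.0.0.2:4420. For readers following along, a condensed sketch of the same setup for a single subsystem is shown below, driven through SPDK's scripts/rpc.py instead of the test's rpc_cmd helper. The rpc.py path and the explicit transport-creation step are assumptions; the remaining flags mirror the rpc_cmd calls recorded in the log.

```bash
# Sketch only: replays the target-side setup performed by the trace above
# for one of the three subsystems. Assumes an SPDK target is already running
# and that ./scripts/rpc.py points at its RPC socket.
rpc=./scripts/rpc.py

# The TCP transport must exist before a listener can be added; in the
# autotest it is created earlier in the run, shown here for completeness.
$rpc nvmf_create_transport -t tcp

# 64 MB null bdev, 512-byte blocks, 16 bytes of per-block metadata, DIF type 2
# (same arguments as the bdev_null_create call in the trace).
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# Expose the bdev over NVMe/TCP on 10.0.0.2:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
```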
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.312 { 00:27:54.312 "params": { 00:27:54.312 "name": "Nvme$subsystem", 00:27:54.312 "trtype": "$TEST_TRANSPORT", 00:27:54.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.312 "adrfam": "ipv4", 00:27:54.312 "trsvcid": "$NVMF_PORT", 00:27:54.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.312 "hdgst": ${hdgst:-false}, 00:27:54.312 "ddgst": ${ddgst:-false} 00:27:54.312 }, 00:27:54.312 "method": "bdev_nvme_attach_controller" 00:27:54.312 } 00:27:54.312 EOF 00:27:54.312 )") 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.312 { 00:27:54.312 "params": { 00:27:54.312 "name": "Nvme$subsystem", 00:27:54.312 "trtype": "$TEST_TRANSPORT", 00:27:54.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.312 "adrfam": "ipv4", 00:27:54.312 "trsvcid": "$NVMF_PORT", 00:27:54.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.312 "hdgst": ${hdgst:-false}, 00:27:54.312 "ddgst": ${ddgst:-false} 00:27:54.312 }, 00:27:54.312 "method": "bdev_nvme_attach_controller" 00:27:54.312 } 00:27:54.312 EOF 00:27:54.312 )") 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.312 { 00:27:54.312 "params": { 00:27:54.312 "name": "Nvme$subsystem", 00:27:54.312 "trtype": "$TEST_TRANSPORT", 00:27:54.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.312 "adrfam": "ipv4", 00:27:54.312 "trsvcid": "$NVMF_PORT", 00:27:54.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.312 "hdgst": ${hdgst:-false}, 00:27:54.312 "ddgst": ${ddgst:-false} 00:27:54.312 }, 00:27:54.312 "method": "bdev_nvme_attach_controller" 00:27:54.312 } 00:27:54.312 EOF 00:27:54.312 )") 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:54.312 "params": { 00:27:54.312 "name": "Nvme0", 00:27:54.312 "trtype": "tcp", 00:27:54.312 "traddr": "10.0.0.2", 00:27:54.312 "adrfam": "ipv4", 00:27:54.312 "trsvcid": "4420", 00:27:54.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:54.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:54.312 "hdgst": false, 00:27:54.312 "ddgst": false 00:27:54.312 }, 00:27:54.312 "method": "bdev_nvme_attach_controller" 00:27:54.312 },{ 00:27:54.312 "params": { 00:27:54.312 "name": "Nvme1", 00:27:54.312 "trtype": "tcp", 00:27:54.312 "traddr": "10.0.0.2", 00:27:54.312 "adrfam": "ipv4", 00:27:54.312 "trsvcid": "4420", 00:27:54.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:54.312 "hdgst": false, 00:27:54.312 "ddgst": false 00:27:54.312 }, 00:27:54.312 "method": "bdev_nvme_attach_controller" 00:27:54.312 },{ 00:27:54.312 "params": { 00:27:54.312 "name": "Nvme2", 00:27:54.312 "trtype": "tcp", 00:27:54.312 "traddr": "10.0.0.2", 00:27:54.312 "adrfam": "ipv4", 00:27:54.312 "trsvcid": "4420", 00:27:54.312 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:54.312 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:54.312 "hdgst": false, 00:27:54.312 "ddgst": false 00:27:54.312 }, 00:27:54.312 "method": "bdev_nvme_attach_controller" 00:27:54.312 }' 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:54.312 00:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.312 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:54.312 ... 00:27:54.313 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:54.313 ... 00:27:54.313 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:54.313 ... 00:27:54.313 fio-3.35 00:27:54.313 Starting 24 threads 00:27:54.313 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.511 00:28:06.511 filename0: (groupid=0, jobs=1): err= 0: pid=3753163: Wed May 15 00:10:05 2024 00:28:06.511 read: IOPS=542, BW=2170KiB/s (2222kB/s)(21.2MiB/10020msec) 00:28:06.511 slat (nsec): min=6702, max=39589, avg=13330.37, stdev=5314.21 00:28:06.511 clat (usec): min=4833, max=74169, avg=29384.29, stdev=6782.17 00:28:06.511 lat (usec): min=4841, max=74191, avg=29397.62, stdev=6783.07 00:28:06.511 clat percentiles (usec): 00:28:06.511 | 1.00th=[11994], 5.00th=[21365], 10.00th=[24249], 20.00th=[25035], 00:28:06.511 | 30.00th=[25297], 40.00th=[25822], 50.00th=[27657], 60.00th=[30802], 00:28:06.511 | 70.00th=[32375], 80.00th=[34341], 90.00th=[37487], 95.00th=[40633], 00:28:06.511 | 99.00th=[47449], 99.50th=[49546], 99.90th=[73925], 99.95th=[73925], 00:28:06.511 | 99.99th=[73925] 00:28:06.511 bw ( KiB/s): min= 1920, max= 2440, per=3.76%, avg=2171.00, stdev=167.35, samples=20 00:28:06.511 iops : min= 480, max= 610, avg=542.60, stdev=41.85, samples=20 00:28:06.511 lat (msec) : 10=0.86%, 20=3.59%, 50=95.07%, 100=0.48% 00:28:06.511 cpu : usr=96.65%, sys=2.94%, ctx=28, majf=0, minf=91 00:28:06.511 IO depths : 1=2.1%, 2=4.5%, 4=15.5%, 8=66.7%, 16=11.2%, 32=0.0%, >=64=0.0% 00:28:06.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 issued rwts: total=5435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.511 filename0: (groupid=0, jobs=1): err= 0: pid=3753164: Wed May 15 00:10:05 2024 00:28:06.511 read: IOPS=594, BW=2379KiB/s (2436kB/s)(23.3MiB/10011msec) 00:28:06.511 slat (nsec): min=6396, max=41919, avg=14710.60, stdev=5776.61 00:28:06.511 clat (usec): min=10439, max=60758, avg=26805.92, stdev=5406.42 00:28:06.511 lat (usec): min=10451, max=60776, avg=26820.63, stdev=5406.66 00:28:06.511 clat percentiles (usec): 00:28:06.511 | 1.00th=[13566], 5.00th=[18482], 10.00th=[22414], 20.00th=[24249], 00:28:06.511 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822], 00:28:06.511 | 70.00th=[26870], 80.00th=[31327], 90.00th=[33817], 95.00th=[36963], 00:28:06.511 | 99.00th=[41681], 99.50th=[44827], 99.90th=[53216], 99.95th=[60556], 00:28:06.511 | 99.99th=[60556] 00:28:06.511 bw ( KiB/s): min= 2144, max= 2560, per=4.13%, avg=2386.68, stdev=108.76, samples=19 00:28:06.511 iops : min= 536, max= 640, avg=596.53, stdev=27.18, samples=19 00:28:06.511 lat (msec) : 20=7.24%, 
50=92.49%, 100=0.27% 00:28:06.511 cpu : usr=96.93%, sys=2.66%, ctx=19, majf=0, minf=70 00:28:06.511 IO depths : 1=1.4%, 2=2.9%, 4=11.0%, 8=73.1%, 16=11.6%, 32=0.0%, >=64=0.0% 00:28:06.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 issued rwts: total=5953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.511 filename0: (groupid=0, jobs=1): err= 0: pid=3753166: Wed May 15 00:10:05 2024 00:28:06.511 read: IOPS=573, BW=2296KiB/s (2351kB/s)(22.4MiB/10004msec) 00:28:06.511 slat (nsec): min=6352, max=46607, avg=14661.21, stdev=5895.44 00:28:06.511 clat (usec): min=10280, max=63284, avg=27782.37, stdev=5466.87 00:28:06.511 lat (usec): min=10288, max=63301, avg=27797.04, stdev=5466.55 00:28:06.511 clat percentiles (usec): 00:28:06.511 | 1.00th=[13566], 5.00th=[19792], 10.00th=[23462], 20.00th=[24511], 00:28:06.511 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26608], 00:28:06.511 | 70.00th=[30802], 80.00th=[32637], 90.00th=[35390], 95.00th=[37487], 00:28:06.511 | 99.00th=[40633], 99.50th=[43254], 99.90th=[49021], 99.95th=[63177], 00:28:06.511 | 99.99th=[63177] 00:28:06.511 bw ( KiB/s): min= 1848, max= 2480, per=3.96%, avg=2289.68, stdev=155.11, samples=19 00:28:06.511 iops : min= 462, max= 620, avg=572.26, stdev=38.76, samples=19 00:28:06.511 lat (msec) : 20=5.24%, 50=94.67%, 100=0.09% 00:28:06.511 cpu : usr=96.88%, sys=2.71%, ctx=17, majf=0, minf=49 00:28:06.511 IO depths : 1=1.1%, 2=2.4%, 4=12.0%, 8=72.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:28:06.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 issued rwts: total=5742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.511 filename0: (groupid=0, jobs=1): err= 0: pid=3753167: Wed May 15 00:10:05 2024 00:28:06.511 read: IOPS=598, BW=2392KiB/s (2449kB/s)(23.4MiB/10005msec) 00:28:06.511 slat (nsec): min=5007, max=40194, avg=12541.71, stdev=5560.57 00:28:06.511 clat (usec): min=5909, max=49432, avg=26685.38, stdev=4583.15 00:28:06.511 lat (usec): min=5924, max=49439, avg=26697.92, stdev=4582.27 00:28:06.511 clat percentiles (usec): 00:28:06.511 | 1.00th=[14877], 5.00th=[23200], 10.00th=[23725], 20.00th=[24511], 00:28:06.511 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25822], 00:28:06.511 | 70.00th=[26346], 80.00th=[28967], 90.00th=[33817], 95.00th=[35914], 00:28:06.511 | 99.00th=[41157], 99.50th=[45351], 99.90th=[48497], 99.95th=[48497], 00:28:06.511 | 99.99th=[49546] 00:28:06.511 bw ( KiB/s): min= 2139, max= 2560, per=4.10%, avg=2372.11, stdev=131.55, samples=19 00:28:06.511 iops : min= 534, max= 640, avg=592.79, stdev=33.13, samples=19 00:28:06.511 lat (msec) : 10=0.28%, 20=2.14%, 50=97.58% 00:28:06.511 cpu : usr=96.54%, sys=3.00%, ctx=18, majf=0, minf=37 00:28:06.511 IO depths : 1=0.3%, 2=0.8%, 4=7.7%, 8=77.2%, 16=14.1%, 32=0.0%, >=64=0.0% 00:28:06.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 complete : 0=0.0%, 4=90.4%, 8=5.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 issued rwts: total=5983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.511 filename0: (groupid=0, jobs=1): err= 0: pid=3753168: Wed May 15 00:10:05 2024 
00:28:06.511 read: IOPS=633, BW=2535KiB/s (2596kB/s)(24.8MiB/10023msec) 00:28:06.511 slat (nsec): min=6611, max=42006, avg=13028.61, stdev=5176.62 00:28:06.511 clat (usec): min=4540, max=45930, avg=25130.72, stdev=3990.17 00:28:06.511 lat (usec): min=4555, max=45942, avg=25143.75, stdev=3991.05 00:28:06.511 clat percentiles (usec): 00:28:06.511 | 1.00th=[10945], 5.00th=[18220], 10.00th=[22938], 20.00th=[23987], 00:28:06.511 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:28:06.511 | 70.00th=[25560], 80.00th=[26084], 90.00th=[29492], 95.00th=[32900], 00:28:06.511 | 99.00th=[36963], 99.50th=[38011], 99.90th=[42730], 99.95th=[45876], 00:28:06.511 | 99.99th=[45876] 00:28:06.511 bw ( KiB/s): min= 2299, max= 2922, per=4.39%, avg=2535.65, stdev=142.73, samples=20 00:28:06.511 iops : min= 574, max= 730, avg=633.80, stdev=35.67, samples=20 00:28:06.511 lat (msec) : 10=0.82%, 20=6.53%, 50=92.65% 00:28:06.511 cpu : usr=96.88%, sys=2.72%, ctx=18, majf=0, minf=47 00:28:06.511 IO depths : 1=4.1%, 2=8.4%, 4=19.2%, 8=59.6%, 16=8.7%, 32=0.0%, >=64=0.0% 00:28:06.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 issued rwts: total=6352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.511 filename0: (groupid=0, jobs=1): err= 0: pid=3753169: Wed May 15 00:10:05 2024 00:28:06.511 read: IOPS=583, BW=2336KiB/s (2392kB/s)(22.8MiB/10017msec) 00:28:06.511 slat (nsec): min=6650, max=44680, avg=14728.95, stdev=5731.58 00:28:06.511 clat (usec): min=10577, max=50087, avg=27297.29, stdev=5595.19 00:28:06.511 lat (usec): min=10590, max=50094, avg=27312.02, stdev=5596.09 00:28:06.511 clat percentiles (usec): 00:28:06.511 | 1.00th=[14484], 5.00th=[17957], 10.00th=[22938], 20.00th=[24511], 00:28:06.511 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:28:06.511 | 70.00th=[30016], 80.00th=[32113], 90.00th=[34866], 95.00th=[37487], 00:28:06.511 | 99.00th=[42730], 99.50th=[44303], 99.90th=[50070], 99.95th=[50070], 00:28:06.511 | 99.99th=[50070] 00:28:06.511 bw ( KiB/s): min= 2096, max= 2522, per=4.03%, avg=2328.58, stdev=125.89, samples=19 00:28:06.511 iops : min= 524, max= 630, avg=582.00, stdev=31.50, samples=19 00:28:06.511 lat (msec) : 20=7.74%, 50=92.08%, 100=0.17% 00:28:06.511 cpu : usr=96.71%, sys=2.88%, ctx=14, majf=0, minf=78 00:28:06.511 IO depths : 1=1.8%, 2=3.8%, 4=12.4%, 8=70.6%, 16=11.3%, 32=0.0%, >=64=0.0% 00:28:06.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.511 complete : 0=0.0%, 4=90.9%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 issued rwts: total=5849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.512 filename0: (groupid=0, jobs=1): err= 0: pid=3753170: Wed May 15 00:10:05 2024 00:28:06.512 read: IOPS=615, BW=2463KiB/s (2522kB/s)(24.1MiB/10017msec) 00:28:06.512 slat (nsec): min=6607, max=46077, avg=15222.30, stdev=5648.20 00:28:06.512 clat (usec): min=12256, max=53847, avg=25867.73, stdev=4291.46 00:28:06.512 lat (usec): min=12264, max=53864, avg=25882.95, stdev=4292.28 00:28:06.512 clat percentiles (usec): 00:28:06.512 | 1.00th=[13960], 5.00th=[20317], 10.00th=[23725], 20.00th=[24249], 00:28:06.512 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:28:06.512 | 70.00th=[25822], 80.00th=[26346], 90.00th=[31589], 95.00th=[34341], 
00:28:06.512 | 99.00th=[42206], 99.50th=[44303], 99.90th=[47973], 99.95th=[47973], 00:28:06.512 | 99.99th=[53740] 00:28:06.512 bw ( KiB/s): min= 2128, max= 2714, per=4.28%, avg=2474.63, stdev=137.34, samples=19 00:28:06.512 iops : min= 532, max= 678, avg=618.53, stdev=34.31, samples=19 00:28:06.512 lat (msec) : 20=4.93%, 50=95.02%, 100=0.05% 00:28:06.512 cpu : usr=97.00%, sys=2.59%, ctx=15, majf=0, minf=54 00:28:06.512 IO depths : 1=2.8%, 2=6.6%, 4=17.6%, 8=63.0%, 16=9.9%, 32=0.0%, >=64=0.0% 00:28:06.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 issued rwts: total=6168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.512 filename0: (groupid=0, jobs=1): err= 0: pid=3753171: Wed May 15 00:10:05 2024 00:28:06.512 read: IOPS=620, BW=2480KiB/s (2540kB/s)(24.3MiB/10022msec) 00:28:06.512 slat (nsec): min=6559, max=41916, avg=14898.71, stdev=5522.19 00:28:06.512 clat (usec): min=4461, max=49439, avg=25687.63, stdev=4834.47 00:28:06.512 lat (usec): min=4470, max=49458, avg=25702.53, stdev=4835.60 00:28:06.512 clat percentiles (usec): 00:28:06.512 | 1.00th=[11994], 5.00th=[16909], 10.00th=[22152], 20.00th=[23987], 00:28:06.512 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:28:06.512 | 70.00th=[25822], 80.00th=[26608], 90.00th=[32637], 95.00th=[34866], 00:28:06.512 | 99.00th=[39060], 99.50th=[40109], 99.90th=[41157], 99.95th=[49546], 00:28:06.512 | 99.99th=[49546] 00:28:06.512 bw ( KiB/s): min= 2171, max= 2816, per=4.29%, avg=2480.65, stdev=149.46, samples=20 00:28:06.512 iops : min= 542, max= 704, avg=620.00, stdev=37.42, samples=20 00:28:06.512 lat (msec) : 10=0.77%, 20=8.03%, 50=91.20% 00:28:06.512 cpu : usr=96.77%, sys=2.82%, ctx=17, majf=0, minf=52 00:28:06.512 IO depths : 1=2.5%, 2=5.1%, 4=13.9%, 8=68.1%, 16=10.4%, 32=0.0%, >=64=0.0% 00:28:06.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 complete : 0=0.0%, 4=91.2%, 8=3.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 issued rwts: total=6214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.512 filename1: (groupid=0, jobs=1): err= 0: pid=3753172: Wed May 15 00:10:05 2024 00:28:06.512 read: IOPS=605, BW=2422KiB/s (2480kB/s)(23.7MiB/10003msec) 00:28:06.512 slat (nsec): min=6213, max=45595, avg=15646.73, stdev=6060.04 00:28:06.512 clat (usec): min=10607, max=61174, avg=26340.96, stdev=4109.44 00:28:06.512 lat (usec): min=10621, max=61193, avg=26356.61, stdev=4109.43 00:28:06.512 clat percentiles (usec): 00:28:06.512 | 1.00th=[16909], 5.00th=[23200], 10.00th=[23725], 20.00th=[24511], 00:28:06.512 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:28:06.512 | 70.00th=[26084], 80.00th=[26870], 90.00th=[31851], 95.00th=[34866], 00:28:06.512 | 99.00th=[41157], 99.50th=[42730], 99.90th=[53216], 99.95th=[61080], 00:28:06.512 | 99.99th=[61080] 00:28:06.512 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2414.53, stdev=106.62, samples=19 00:28:06.512 iops : min= 544, max= 640, avg=603.47, stdev=26.62, samples=19 00:28:06.512 lat (msec) : 20=2.53%, 50=97.21%, 100=0.26% 00:28:06.512 cpu : usr=96.59%, sys=2.95%, ctx=17, majf=0, minf=41 00:28:06.512 IO depths : 1=0.1%, 2=0.5%, 4=6.8%, 8=79.3%, 16=13.3%, 32=0.0%, >=64=0.0% 00:28:06.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:06.512 complete : 0=0.0%, 4=89.5%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 issued rwts: total=6057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.512 filename1: (groupid=0, jobs=1): err= 0: pid=3753173: Wed May 15 00:10:05 2024 00:28:06.512 read: IOPS=577, BW=2311KiB/s (2367kB/s)(22.6MiB/10009msec) 00:28:06.512 slat (nsec): min=6320, max=47893, avg=14903.72, stdev=6108.99 00:28:06.512 clat (usec): min=6245, max=54068, avg=27608.48, stdev=5678.30 00:28:06.512 lat (usec): min=6264, max=54085, avg=27623.39, stdev=5677.98 00:28:06.512 clat percentiles (usec): 00:28:06.512 | 1.00th=[13829], 5.00th=[20579], 10.00th=[23725], 20.00th=[24511], 00:28:06.512 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:28:06.512 | 70.00th=[28967], 80.00th=[32113], 90.00th=[35390], 95.00th=[38011], 00:28:06.512 | 99.00th=[45876], 99.50th=[49546], 99.90th=[54264], 99.95th=[54264], 00:28:06.512 | 99.99th=[54264] 00:28:06.512 bw ( KiB/s): min= 1968, max= 2432, per=3.98%, avg=2300.21, stdev=102.65, samples=19 00:28:06.512 iops : min= 492, max= 608, avg=574.89, stdev=25.59, samples=19 00:28:06.512 lat (msec) : 10=0.07%, 20=4.50%, 50=95.09%, 100=0.35% 00:28:06.512 cpu : usr=96.53%, sys=3.05%, ctx=19, majf=0, minf=51 00:28:06.512 IO depths : 1=0.3%, 2=0.7%, 4=7.6%, 8=77.8%, 16=13.6%, 32=0.0%, >=64=0.0% 00:28:06.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 issued rwts: total=5783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.512 filename1: (groupid=0, jobs=1): err= 0: pid=3753174: Wed May 15 00:10:05 2024 00:28:06.512 read: IOPS=609, BW=2439KiB/s (2498kB/s)(23.9MiB/10019msec) 00:28:06.512 slat (nsec): min=6555, max=39551, avg=14835.68, stdev=5735.38 00:28:06.512 clat (usec): min=10435, max=51536, avg=26142.85, stdev=4594.81 00:28:06.512 lat (usec): min=10449, max=51557, avg=26157.68, stdev=4594.57 00:28:06.512 clat percentiles (usec): 00:28:06.512 | 1.00th=[14091], 5.00th=[19268], 10.00th=[23462], 20.00th=[24249], 00:28:06.512 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:28:06.512 | 70.00th=[25822], 80.00th=[26870], 90.00th=[32375], 95.00th=[34866], 00:28:06.512 | 99.00th=[41681], 99.50th=[42730], 99.90th=[48497], 99.95th=[51643], 00:28:06.512 | 99.99th=[51643] 00:28:06.512 bw ( KiB/s): min= 2256, max= 2704, per=4.23%, avg=2445.16, stdev=121.15, samples=19 00:28:06.512 iops : min= 564, max= 676, avg=611.16, stdev=30.35, samples=19 00:28:06.512 lat (msec) : 20=5.71%, 50=94.22%, 100=0.07% 00:28:06.512 cpu : usr=96.90%, sys=2.66%, ctx=15, majf=0, minf=51 00:28:06.512 IO depths : 1=1.3%, 2=2.6%, 4=10.6%, 8=73.8%, 16=11.7%, 32=0.0%, >=64=0.0% 00:28:06.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 complete : 0=0.0%, 4=90.3%, 8=4.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 issued rwts: total=6109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.512 filename1: (groupid=0, jobs=1): err= 0: pid=3753176: Wed May 15 00:10:05 2024 00:28:06.512 read: IOPS=606, BW=2425KiB/s (2483kB/s)(23.7MiB/10019msec) 00:28:06.512 slat (nsec): min=6607, max=45637, avg=15433.11, stdev=5770.42 00:28:06.512 clat (usec): min=13290, max=49054, avg=26291.15, stdev=4119.30 00:28:06.512 lat (usec): 
min=13301, max=49063, avg=26306.58, stdev=4119.12 00:28:06.512 clat percentiles (usec): 00:28:06.512 | 1.00th=[16057], 5.00th=[22414], 10.00th=[23725], 20.00th=[24249], 00:28:06.512 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:28:06.512 | 70.00th=[25822], 80.00th=[26608], 90.00th=[32637], 95.00th=[34866], 00:28:06.512 | 99.00th=[39584], 99.50th=[40109], 99.90th=[49021], 99.95th=[49021], 00:28:06.512 | 99.99th=[49021] 00:28:06.512 bw ( KiB/s): min= 2176, max= 2608, per=4.20%, avg=2425.84, stdev=102.73, samples=19 00:28:06.512 iops : min= 544, max= 652, avg=606.32, stdev=25.69, samples=19 00:28:06.512 lat (msec) : 20=3.84%, 50=96.16% 00:28:06.512 cpu : usr=97.05%, sys=2.52%, ctx=20, majf=0, minf=44 00:28:06.512 IO depths : 1=1.4%, 2=3.0%, 4=12.1%, 8=72.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:28:06.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 complete : 0=0.0%, 4=90.6%, 8=3.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 issued rwts: total=6074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.512 filename1: (groupid=0, jobs=1): err= 0: pid=3753177: Wed May 15 00:10:05 2024 00:28:06.512 read: IOPS=576, BW=2305KiB/s (2361kB/s)(22.5MiB/10004msec) 00:28:06.512 slat (nsec): min=4856, max=43365, avg=13681.28, stdev=5627.59 00:28:06.512 clat (usec): min=6553, max=65312, avg=27677.47, stdev=5335.99 00:28:06.512 lat (usec): min=6561, max=65330, avg=27691.15, stdev=5335.42 00:28:06.512 clat percentiles (usec): 00:28:06.512 | 1.00th=[15270], 5.00th=[20579], 10.00th=[23725], 20.00th=[24511], 00:28:06.512 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:28:06.512 | 70.00th=[29754], 80.00th=[32375], 90.00th=[34866], 95.00th=[37487], 00:28:06.512 | 99.00th=[41157], 99.50th=[45351], 99.90th=[57934], 99.95th=[65274], 00:28:06.512 | 99.99th=[65274] 00:28:06.512 bw ( KiB/s): min= 1923, max= 2560, per=3.99%, avg=2307.63, stdev=156.61, samples=19 00:28:06.512 iops : min= 480, max= 640, avg=576.63, stdev=39.21, samples=19 00:28:06.512 lat (msec) : 10=0.28%, 20=4.08%, 50=95.37%, 100=0.28% 00:28:06.512 cpu : usr=96.82%, sys=2.76%, ctx=20, majf=0, minf=64 00:28:06.512 IO depths : 1=1.2%, 2=2.7%, 4=10.7%, 8=73.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:28:06.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.512 issued rwts: total=5766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.512 filename1: (groupid=0, jobs=1): err= 0: pid=3753178: Wed May 15 00:10:05 2024 00:28:06.512 read: IOPS=597, BW=2390KiB/s (2448kB/s)(23.4MiB/10003msec) 00:28:06.512 slat (nsec): min=6510, max=44791, avg=13611.20, stdev=6000.32 00:28:06.512 clat (usec): min=11718, max=64026, avg=26704.55, stdev=4546.12 00:28:06.512 lat (usec): min=11725, max=64039, avg=26718.16, stdev=4545.81 00:28:06.512 clat percentiles (usec): 00:28:06.512 | 1.00th=[15664], 5.00th=[22938], 10.00th=[23987], 20.00th=[24511], 00:28:06.512 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:28:06.512 | 70.00th=[26346], 80.00th=[28443], 90.00th=[32900], 95.00th=[35914], 00:28:06.513 | 99.00th=[42730], 99.50th=[44827], 99.90th=[50070], 99.95th=[63701], 00:28:06.513 | 99.99th=[64226] 00:28:06.513 bw ( KiB/s): min= 2176, max= 2536, per=4.12%, avg=2382.16, stdev=102.18, samples=19 00:28:06.513 iops : min= 544, max= 
634, avg=595.32, stdev=25.46, samples=19 00:28:06.513 lat (msec) : 20=3.09%, 50=96.79%, 100=0.12% 00:28:06.513 cpu : usr=96.41%, sys=3.11%, ctx=17, majf=0, minf=61 00:28:06.513 IO depths : 1=0.2%, 2=0.6%, 4=6.1%, 8=78.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:28:06.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 complete : 0=0.0%, 4=89.9%, 8=6.4%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 issued rwts: total=5978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.513 filename1: (groupid=0, jobs=1): err= 0: pid=3753179: Wed May 15 00:10:05 2024 00:28:06.513 read: IOPS=614, BW=2457KiB/s (2516kB/s)(24.0MiB/10019msec) 00:28:06.513 slat (nsec): min=6559, max=39065, avg=14301.61, stdev=5349.80 00:28:06.513 clat (usec): min=11389, max=52200, avg=25951.77, stdev=3868.49 00:28:06.513 lat (usec): min=11397, max=52237, avg=25966.07, stdev=3868.52 00:28:06.513 clat percentiles (usec): 00:28:06.513 | 1.00th=[15795], 5.00th=[22414], 10.00th=[23725], 20.00th=[24249], 00:28:06.513 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:28:06.513 | 70.00th=[25822], 80.00th=[26346], 90.00th=[31589], 95.00th=[34341], 00:28:06.513 | 99.00th=[38011], 99.50th=[42730], 99.90th=[49021], 99.95th=[49021], 00:28:06.513 | 99.99th=[52167] 00:28:06.513 bw ( KiB/s): min= 2176, max= 2560, per=4.24%, avg=2451.11, stdev=94.40, samples=19 00:28:06.513 iops : min= 544, max= 640, avg=612.63, stdev=23.67, samples=19 00:28:06.513 lat (msec) : 20=4.11%, 50=95.84%, 100=0.05% 00:28:06.513 cpu : usr=96.63%, sys=2.94%, ctx=12, majf=0, minf=59 00:28:06.513 IO depths : 1=1.0%, 2=2.1%, 4=9.6%, 8=75.4%, 16=11.9%, 32=0.0%, >=64=0.0% 00:28:06.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 complete : 0=0.0%, 4=90.0%, 8=4.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 issued rwts: total=6154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.513 filename1: (groupid=0, jobs=1): err= 0: pid=3753180: Wed May 15 00:10:05 2024 00:28:06.513 read: IOPS=626, BW=2507KiB/s (2568kB/s)(24.5MiB/10023msec) 00:28:06.513 slat (nsec): min=6671, max=38865, avg=14715.06, stdev=5104.93 00:28:06.513 clat (usec): min=11906, max=49458, avg=25427.71, stdev=3000.53 00:28:06.513 lat (usec): min=11914, max=49477, avg=25442.43, stdev=3000.72 00:28:06.513 clat percentiles (usec): 00:28:06.513 | 1.00th=[16712], 5.00th=[22938], 10.00th=[23725], 20.00th=[24249], 00:28:06.513 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:28:06.513 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26608], 95.00th=[31327], 00:28:06.513 | 99.00th=[37487], 99.50th=[38536], 99.90th=[43254], 99.95th=[49546], 00:28:06.513 | 99.99th=[49546] 00:28:06.513 bw ( KiB/s): min= 2304, max= 2640, per=4.34%, avg=2505.32, stdev=85.02, samples=19 00:28:06.513 iops : min= 576, max= 660, avg=626.21, stdev=21.24, samples=19 00:28:06.513 lat (msec) : 20=3.22%, 50=96.78% 00:28:06.513 cpu : usr=96.74%, sys=2.84%, ctx=16, majf=0, minf=38 00:28:06.513 IO depths : 1=0.6%, 2=1.1%, 4=7.9%, 8=77.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:06.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 complete : 0=0.0%, 4=89.5%, 8=5.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 issued rwts: total=6283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.513 filename2: 
(groupid=0, jobs=1): err= 0: pid=3753181: Wed May 15 00:10:05 2024 00:28:06.513 read: IOPS=617, BW=2469KiB/s (2528kB/s)(24.2MiB/10019msec) 00:28:06.513 slat (nsec): min=6539, max=40911, avg=14513.78, stdev=6075.09 00:28:06.513 clat (usec): min=10980, max=48058, avg=25838.98, stdev=3486.40 00:28:06.513 lat (usec): min=10987, max=48066, avg=25853.50, stdev=3485.86 00:28:06.513 clat percentiles (usec): 00:28:06.513 | 1.00th=[16057], 5.00th=[23200], 10.00th=[23725], 20.00th=[24511], 00:28:06.513 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:28:06.513 | 70.00th=[25822], 80.00th=[26084], 90.00th=[30278], 95.00th=[33817], 00:28:06.513 | 99.00th=[38011], 99.50th=[39584], 99.90th=[41681], 99.95th=[43254], 00:28:06.513 | 99.99th=[47973] 00:28:06.513 bw ( KiB/s): min= 2256, max= 2560, per=4.27%, avg=2465.26, stdev=85.66, samples=19 00:28:06.513 iops : min= 564, max= 640, avg=616.21, stdev=21.33, samples=19 00:28:06.513 lat (msec) : 20=3.10%, 50=96.90% 00:28:06.513 cpu : usr=96.74%, sys=2.83%, ctx=14, majf=0, minf=50 00:28:06.513 IO depths : 1=0.5%, 2=1.2%, 4=6.7%, 8=77.9%, 16=13.7%, 32=0.0%, >=64=0.0% 00:28:06.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 complete : 0=0.0%, 4=89.7%, 8=6.3%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 issued rwts: total=6184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.513 filename2: (groupid=0, jobs=1): err= 0: pid=3753182: Wed May 15 00:10:05 2024 00:28:06.513 read: IOPS=613, BW=2455KiB/s (2514kB/s)(24.0MiB/10005msec) 00:28:06.513 slat (nsec): min=4649, max=49269, avg=15836.16, stdev=6048.42 00:28:06.513 clat (usec): min=5513, max=49981, avg=25989.01, stdev=4165.63 00:28:06.513 lat (usec): min=5520, max=49995, avg=26004.85, stdev=4165.24 00:28:06.513 clat percentiles (usec): 00:28:06.513 | 1.00th=[15401], 5.00th=[22676], 10.00th=[23725], 20.00th=[24249], 00:28:06.513 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:28:06.513 | 70.00th=[25822], 80.00th=[26346], 90.00th=[31589], 95.00th=[34341], 00:28:06.513 | 99.00th=[40633], 99.50th=[42730], 99.90th=[49546], 99.95th=[49546], 00:28:06.513 | 99.99th=[50070] 00:28:06.513 bw ( KiB/s): min= 2176, max= 2560, per=4.21%, avg=2435.47, stdev=105.71, samples=19 00:28:06.513 iops : min= 544, max= 640, avg=608.63, stdev=26.57, samples=19 00:28:06.513 lat (msec) : 10=0.49%, 20=2.88%, 50=96.63% 00:28:06.513 cpu : usr=96.62%, sys=2.96%, ctx=13, majf=0, minf=45 00:28:06.513 IO depths : 1=0.2%, 2=0.7%, 4=7.5%, 8=78.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:28:06.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 complete : 0=0.0%, 4=89.8%, 8=5.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 issued rwts: total=6140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.513 filename2: (groupid=0, jobs=1): err= 0: pid=3753183: Wed May 15 00:10:05 2024 00:28:06.513 read: IOPS=598, BW=2395KiB/s (2452kB/s)(23.4MiB/10013msec) 00:28:06.513 slat (nsec): min=3948, max=43069, avg=14960.20, stdev=6484.83 00:28:06.513 clat (usec): min=12606, max=47625, avg=26628.92, stdev=4278.55 00:28:06.513 lat (usec): min=12627, max=47633, avg=26643.88, stdev=4277.60 00:28:06.513 clat percentiles (usec): 00:28:06.513 | 1.00th=[16712], 5.00th=[23200], 10.00th=[23987], 20.00th=[24511], 00:28:06.513 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:28:06.513 | 
70.00th=[26084], 80.00th=[28443], 90.00th=[32900], 95.00th=[35390], 00:28:06.513 | 99.00th=[40633], 99.50th=[43254], 99.90th=[46924], 99.95th=[47449], 00:28:06.513 | 99.99th=[47449] 00:28:06.513 bw ( KiB/s): min= 1904, max= 2560, per=4.13%, avg=2388.32, stdev=161.03, samples=19 00:28:06.513 iops : min= 476, max= 640, avg=596.95, stdev=40.20, samples=19 00:28:06.513 lat (msec) : 20=2.94%, 50=97.06% 00:28:06.513 cpu : usr=96.82%, sys=2.73%, ctx=14, majf=0, minf=52 00:28:06.513 IO depths : 1=0.6%, 2=1.2%, 4=7.9%, 8=77.7%, 16=12.7%, 32=0.0%, >=64=0.0% 00:28:06.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 complete : 0=0.0%, 4=89.7%, 8=5.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 issued rwts: total=5995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.513 filename2: (groupid=0, jobs=1): err= 0: pid=3753184: Wed May 15 00:10:05 2024 00:28:06.513 read: IOPS=588, BW=2353KiB/s (2410kB/s)(23.0MiB/10005msec) 00:28:06.513 slat (nsec): min=4773, max=44198, avg=14291.07, stdev=5879.12 00:28:06.513 clat (usec): min=5073, max=81997, avg=27117.96, stdev=5369.08 00:28:06.513 lat (usec): min=5084, max=82009, avg=27132.25, stdev=5368.71 00:28:06.513 clat percentiles (usec): 00:28:06.513 | 1.00th=[14484], 5.00th=[22152], 10.00th=[23725], 20.00th=[24511], 00:28:06.513 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:28:06.513 | 70.00th=[26608], 80.00th=[31065], 90.00th=[33817], 95.00th=[36439], 00:28:06.513 | 99.00th=[42206], 99.50th=[45876], 99.90th=[65274], 99.95th=[82314], 00:28:06.513 | 99.99th=[82314] 00:28:06.513 bw ( KiB/s): min= 2128, max= 2459, per=4.05%, avg=2338.84, stdev=96.00, samples=19 00:28:06.513 iops : min= 532, max= 614, avg=584.47, stdev=23.97, samples=19 00:28:06.513 lat (msec) : 10=0.32%, 20=3.30%, 50=96.11%, 100=0.27% 00:28:06.513 cpu : usr=96.56%, sys=3.02%, ctx=19, majf=0, minf=41 00:28:06.513 IO depths : 1=0.5%, 2=1.0%, 4=8.1%, 8=77.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:28:06.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 complete : 0=0.0%, 4=89.7%, 8=5.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 issued rwts: total=5886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.513 filename2: (groupid=0, jobs=1): err= 0: pid=3753185: Wed May 15 00:10:05 2024 00:28:06.513 read: IOPS=597, BW=2388KiB/s (2445kB/s)(23.4MiB/10013msec) 00:28:06.513 slat (nsec): min=6284, max=41711, avg=13960.89, stdev=6343.05 00:28:06.513 clat (usec): min=10738, max=73954, avg=26717.04, stdev=5140.69 00:28:06.513 lat (usec): min=10746, max=73970, avg=26731.00, stdev=5140.19 00:28:06.513 clat percentiles (usec): 00:28:06.513 | 1.00th=[14484], 5.00th=[21365], 10.00th=[23462], 20.00th=[24249], 00:28:06.513 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25822], 00:28:06.513 | 70.00th=[26084], 80.00th=[29754], 90.00th=[33817], 95.00th=[36963], 00:28:06.513 | 99.00th=[41157], 99.50th=[49546], 99.90th=[57934], 99.95th=[73925], 00:28:06.513 | 99.99th=[73925] 00:28:06.513 bw ( KiB/s): min= 2080, max= 2560, per=4.14%, avg=2391.58, stdev=144.87, samples=19 00:28:06.513 iops : min= 520, max= 640, avg=597.68, stdev=36.32, samples=19 00:28:06.513 lat (msec) : 20=4.65%, 50=94.90%, 100=0.45% 00:28:06.513 cpu : usr=96.67%, sys=2.91%, ctx=14, majf=0, minf=51 00:28:06.513 IO depths : 1=0.8%, 2=1.8%, 4=8.5%, 8=75.5%, 16=13.4%, 32=0.0%, >=64=0.0% 00:28:06.513 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 complete : 0=0.0%, 4=90.3%, 8=5.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.513 issued rwts: total=5978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.513 filename2: (groupid=0, jobs=1): err= 0: pid=3753187: Wed May 15 00:10:05 2024 00:28:06.514 read: IOPS=630, BW=2522KiB/s (2583kB/s)(24.7MiB/10025msec) 00:28:06.514 slat (nsec): min=3022, max=38573, avg=12467.27, stdev=5165.83 00:28:06.514 clat (usec): min=3321, max=45641, avg=25272.80, stdev=5608.37 00:28:06.514 lat (usec): min=3330, max=45659, avg=25285.27, stdev=5609.95 00:28:06.514 clat percentiles (usec): 00:28:06.514 | 1.00th=[ 7898], 5.00th=[14353], 10.00th=[18744], 20.00th=[23725], 00:28:06.514 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:28:06.514 | 70.00th=[25822], 80.00th=[27132], 90.00th=[32900], 95.00th=[34866], 00:28:06.514 | 99.00th=[41681], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:28:06.514 | 99.99th=[45876] 00:28:06.514 bw ( KiB/s): min= 2176, max= 3088, per=4.37%, avg=2523.45, stdev=223.06, samples=20 00:28:06.514 iops : min= 544, max= 772, avg=630.70, stdev=55.80, samples=20 00:28:06.514 lat (msec) : 4=0.17%, 10=1.25%, 20=10.50%, 50=88.07% 00:28:06.514 cpu : usr=96.47%, sys=3.10%, ctx=24, majf=0, minf=49 00:28:06.514 IO depths : 1=1.4%, 2=2.9%, 4=9.7%, 8=73.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:28:06.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.514 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.514 issued rwts: total=6322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.514 filename2: (groupid=0, jobs=1): err= 0: pid=3753188: Wed May 15 00:10:05 2024 00:28:06.514 read: IOPS=642, BW=2568KiB/s (2630kB/s)(25.1MiB/10020msec) 00:28:06.514 slat (nsec): min=3999, max=59794, avg=12467.18, stdev=5225.30 00:28:06.514 clat (usec): min=4065, max=46861, avg=24826.79, stdev=4620.21 00:28:06.514 lat (usec): min=4072, max=46876, avg=24839.25, stdev=4621.40 00:28:06.514 clat percentiles (usec): 00:28:06.514 | 1.00th=[10159], 5.00th=[15795], 10.00th=[20579], 20.00th=[23987], 00:28:06.514 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:28:06.514 | 70.00th=[25560], 80.00th=[26084], 90.00th=[28705], 95.00th=[32375], 00:28:06.514 | 99.00th=[38536], 99.50th=[40109], 99.90th=[45876], 99.95th=[46924], 00:28:06.514 | 99.99th=[46924] 00:28:06.514 bw ( KiB/s): min= 2299, max= 3024, per=4.45%, avg=2570.05, stdev=175.43, samples=20 00:28:06.514 iops : min= 574, max= 756, avg=642.40, stdev=43.89, samples=20 00:28:06.514 lat (msec) : 10=0.84%, 20=8.33%, 50=90.83% 00:28:06.514 cpu : usr=96.50%, sys=3.10%, ctx=19, majf=0, minf=97 00:28:06.514 IO depths : 1=1.3%, 2=2.6%, 4=7.8%, 8=74.5%, 16=13.8%, 32=0.0%, >=64=0.0% 00:28:06.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.514 complete : 0=0.0%, 4=90.2%, 8=6.6%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.514 issued rwts: total=6434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.514 filename2: (groupid=0, jobs=1): err= 0: pid=3753189: Wed May 15 00:10:05 2024 00:28:06.514 read: IOPS=598, BW=2392KiB/s (2449kB/s)(23.4MiB/10003msec) 00:28:06.514 slat (nsec): min=6541, max=41891, avg=12805.13, stdev=5540.85 00:28:06.514 clat (usec): min=4613, 
max=71544, avg=26684.32, stdev=5027.21 00:28:06.514 lat (usec): min=4620, max=71557, avg=26697.12, stdev=5026.79 00:28:06.514 clat percentiles (usec): 00:28:06.514 | 1.00th=[13698], 5.00th=[22152], 10.00th=[23462], 20.00th=[24511], 00:28:06.514 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25822], 00:28:06.514 | 70.00th=[26346], 80.00th=[29492], 90.00th=[33162], 95.00th=[36439], 00:28:06.514 | 99.00th=[40109], 99.50th=[42206], 99.90th=[65274], 99.95th=[71828], 00:28:06.514 | 99.99th=[71828] 00:28:06.514 bw ( KiB/s): min= 2176, max= 2506, per=4.10%, avg=2371.63, stdev=94.73, samples=19 00:28:06.514 iops : min= 544, max= 626, avg=592.68, stdev=23.58, samples=19 00:28:06.514 lat (msec) : 10=0.28%, 20=3.29%, 50=96.16%, 100=0.27% 00:28:06.514 cpu : usr=97.02%, sys=2.55%, ctx=16, majf=0, minf=63 00:28:06.514 IO depths : 1=0.7%, 2=1.6%, 4=8.1%, 8=76.3%, 16=13.3%, 32=0.0%, >=64=0.0% 00:28:06.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.514 complete : 0=0.0%, 4=90.1%, 8=5.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.514 issued rwts: total=5982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.514 00:28:06.514 Run status group 0 (all jobs): 00:28:06.514 READ: bw=56.4MiB/s (59.2MB/s), 2170KiB/s-2568KiB/s (2222kB/s-2630kB/s), io=566MiB (593MB), run=10003-10025msec 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:06.514 
00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 bdev_null0 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 [2024-05-15 00:10:05.917549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.514 bdev_null1 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.514 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.515 { 00:28:06.515 "params": { 00:28:06.515 "name": "Nvme$subsystem", 00:28:06.515 "trtype": "$TEST_TRANSPORT", 00:28:06.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.515 "adrfam": "ipv4", 00:28:06.515 "trsvcid": "$NVMF_PORT", 00:28:06.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.515 "hdgst": ${hdgst:-false}, 00:28:06.515 "ddgst": ${ddgst:-false} 00:28:06.515 }, 00:28:06.515 "method": "bdev_nvme_attach_controller" 00:28:06.515 } 00:28:06.515 EOF 00:28:06.515 )") 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.515 { 00:28:06.515 "params": { 00:28:06.515 "name": "Nvme$subsystem", 00:28:06.515 "trtype": "$TEST_TRANSPORT", 00:28:06.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.515 "adrfam": "ipv4", 00:28:06.515 "trsvcid": "$NVMF_PORT", 00:28:06.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.515 "hdgst": ${hdgst:-false}, 00:28:06.515 "ddgst": 
${ddgst:-false} 00:28:06.515 }, 00:28:06.515 "method": "bdev_nvme_attach_controller" 00:28:06.515 } 00:28:06.515 EOF 00:28:06.515 )") 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:06.515 "params": { 00:28:06.515 "name": "Nvme0", 00:28:06.515 "trtype": "tcp", 00:28:06.515 "traddr": "10.0.0.2", 00:28:06.515 "adrfam": "ipv4", 00:28:06.515 "trsvcid": "4420", 00:28:06.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:06.515 "hdgst": false, 00:28:06.515 "ddgst": false 00:28:06.515 }, 00:28:06.515 "method": "bdev_nvme_attach_controller" 00:28:06.515 },{ 00:28:06.515 "params": { 00:28:06.515 "name": "Nvme1", 00:28:06.515 "trtype": "tcp", 00:28:06.515 "traddr": "10.0.0.2", 00:28:06.515 "adrfam": "ipv4", 00:28:06.515 "trsvcid": "4420", 00:28:06.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:06.515 "hdgst": false, 00:28:06.515 "ddgst": false 00:28:06.515 }, 00:28:06.515 "method": "bdev_nvme_attach_controller" 00:28:06.515 }' 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:06.515 00:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:06.515 00:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:06.515 00:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:06.515 00:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:06.515 00:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.515 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:06.515 ... 00:28:06.515 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:06.515 ... 
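
Editor's note: the trace above shows how the rand_params case drives fio against the target: dif.sh generates a bdev JSON configuration with one bdev_nvme_attach_controller entry per subsystem, hands it to fio over a file descriptor, preloads the SPDK fio plugin, and selects ioengine=spdk_bdev. A minimal hand-written equivalent is sketched below; the controller parameters mirror the config printed in the trace, while the "subsystems"/"config" wrapper, the /tmp/bdev.json path, and the Nvme0n1 bdev name are assumptions for illustration, not the harness's own files.

# Sketch only: run fio against the NVMe/TCP target with the SPDK bdev ioengine.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
PLUGIN=$SPDK_DIR/build/fio/spdk_bdev
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0"
          }
        }
      ]
    }
  ]
}
JSON
# Preload the plugin so fio can resolve the spdk_bdev ioengine, then run the
# same randread shape reported below (8 KiB reads, iodepth 8, hypothetical bdev name).
LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/bdev.json --thread=1 \
    --name=filename0 --filename=Nvme0n1 --rw=randread --bs=8k --iodepth=8 \
    --time_based=1 --runtime=10

In the test itself the JSON and the fio job file are generated on the fly and passed as /dev/fd/62 and /dev/fd/61, which is why no temporary files appear in the trace.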
00:28:06.515 fio-3.35 00:28:06.515 Starting 4 threads 00:28:06.515 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.787 00:28:11.787 filename0: (groupid=0, jobs=1): err= 0: pid=3755130: Wed May 15 00:10:12 2024 00:28:11.787 read: IOPS=2648, BW=20.7MiB/s (21.7MB/s)(104MiB/5002msec) 00:28:11.787 slat (nsec): min=5850, max=55964, avg=12036.08, stdev=7079.32 00:28:11.787 clat (usec): min=1641, max=45875, avg=2990.79, stdev=1122.23 00:28:11.787 lat (usec): min=1647, max=45900, avg=3002.83, stdev=1122.05 00:28:11.787 clat percentiles (usec): 00:28:11.787 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2737], 00:28:11.787 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2966], 00:28:11.787 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3458], 95.00th=[ 3687], 00:28:11.787 | 99.00th=[ 4146], 99.50th=[ 4359], 99.90th=[ 4817], 99.95th=[45876], 00:28:11.787 | 99.99th=[45876] 00:28:11.787 bw ( KiB/s): min=19488, max=21760, per=24.31%, avg=21176.89, stdev=704.60, samples=9 00:28:11.787 iops : min= 2436, max= 2720, avg=2647.11, stdev=88.07, samples=9 00:28:11.787 lat (msec) : 2=0.48%, 4=97.58%, 10=1.88%, 50=0.06% 00:28:11.787 cpu : usr=95.46%, sys=4.20%, ctx=7, majf=0, minf=65 00:28:11.787 IO depths : 1=0.1%, 2=0.8%, 4=65.5%, 8=33.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.787 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.787 issued rwts: total=13250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:11.787 filename0: (groupid=0, jobs=1): err= 0: pid=3755131: Wed May 15 00:10:12 2024 00:28:11.787 read: IOPS=2926, BW=22.9MiB/s (24.0MB/s)(114MiB/5002msec) 00:28:11.787 slat (nsec): min=5853, max=63553, avg=10447.28, stdev=5703.61 00:28:11.787 clat (usec): min=844, max=45006, avg=2706.59, stdev=1073.83 00:28:11.787 lat (usec): min=851, max=45028, avg=2717.04, stdev=1073.97 00:28:11.787 clat percentiles (usec): 00:28:11.787 | 1.00th=[ 1663], 5.00th=[ 1975], 10.00th=[ 2114], 20.00th=[ 2311], 00:28:11.787 | 30.00th=[ 2442], 40.00th=[ 2606], 50.00th=[ 2802], 60.00th=[ 2900], 00:28:11.787 | 70.00th=[ 2933], 80.00th=[ 2933], 90.00th=[ 3163], 95.00th=[ 3326], 00:28:11.787 | 99.00th=[ 3720], 99.50th=[ 3884], 99.90th=[ 4359], 99.95th=[44827], 00:28:11.787 | 99.99th=[44827] 00:28:11.787 bw ( KiB/s): min=21568, max=25952, per=27.11%, avg=23617.78, stdev=1747.69, samples=9 00:28:11.787 iops : min= 2696, max= 3244, avg=2952.22, stdev=218.46, samples=9 00:28:11.787 lat (usec) : 1000=0.01% 00:28:11.787 lat (msec) : 2=5.82%, 4=93.86%, 10=0.26%, 50=0.05% 00:28:11.787 cpu : usr=94.96%, sys=4.66%, ctx=8, majf=0, minf=96 00:28:11.787 IO depths : 1=0.1%, 2=2.4%, 4=65.5%, 8=32.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.787 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.787 issued rwts: total=14637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:11.787 filename1: (groupid=0, jobs=1): err= 0: pid=3755132: Wed May 15 00:10:12 2024 00:28:11.787 read: IOPS=2649, BW=20.7MiB/s (21.7MB/s)(104MiB/5002msec) 00:28:11.787 slat (nsec): min=5755, max=70334, avg=10677.53, stdev=5738.93 00:28:11.787 clat (usec): min=1666, max=47044, avg=2993.06, stdev=1151.20 00:28:11.787 lat (usec): min=1672, max=47066, avg=3003.74, stdev=1151.20 00:28:11.787 clat 
percentiles (usec): 00:28:11.787 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2704], 00:28:11.787 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933], 00:28:11.787 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3458], 95.00th=[ 3687], 00:28:11.787 | 99.00th=[ 4178], 99.50th=[ 4424], 99.90th=[ 4817], 99.95th=[46924], 00:28:11.787 | 99.99th=[46924] 00:28:11.787 bw ( KiB/s): min=19472, max=21760, per=24.27%, avg=21139.56, stdev=705.58, samples=9 00:28:11.787 iops : min= 2434, max= 2720, avg=2642.44, stdev=88.20, samples=9 00:28:11.787 lat (msec) : 2=0.63%, 4=97.45%, 10=1.86%, 50=0.06% 00:28:11.787 cpu : usr=94.70%, sys=4.88%, ctx=66, majf=0, minf=79 00:28:11.787 IO depths : 1=0.1%, 2=1.2%, 4=65.0%, 8=33.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.787 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.787 issued rwts: total=13251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:11.787 filename1: (groupid=0, jobs=1): err= 0: pid=3755134: Wed May 15 00:10:12 2024 00:28:11.787 read: IOPS=2665, BW=20.8MiB/s (21.8MB/s)(104MiB/5002msec) 00:28:11.787 slat (nsec): min=5794, max=71844, avg=11001.59, stdev=5701.24 00:28:11.787 clat (usec): min=1149, max=45160, avg=2974.15, stdev=1484.31 00:28:11.787 lat (usec): min=1155, max=45176, avg=2985.15, stdev=1484.37 00:28:11.787 clat percentiles (usec): 00:28:11.787 | 1.00th=[ 1975], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2671], 00:28:11.787 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933], 00:28:11.787 | 70.00th=[ 2999], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3621], 00:28:11.787 | 99.00th=[ 4080], 99.50th=[ 4293], 99.90th=[43254], 99.95th=[44827], 00:28:11.787 | 99.99th=[45351] 00:28:11.787 bw ( KiB/s): min=19360, max=22304, per=24.40%, avg=21254.22, stdev=925.15, samples=9 00:28:11.787 iops : min= 2420, max= 2788, avg=2656.78, stdev=115.64, samples=9 00:28:11.787 lat (msec) : 2=1.10%, 4=97.44%, 10=1.34%, 50=0.12% 00:28:11.787 cpu : usr=92.50%, sys=6.10%, ctx=206, majf=0, minf=77 00:28:11.787 IO depths : 1=0.1%, 2=1.0%, 4=65.1%, 8=33.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.787 complete : 0=0.0%, 4=97.1%, 8=2.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.787 issued rwts: total=13331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:11.787 00:28:11.787 Run status group 0 (all jobs): 00:28:11.787 READ: bw=85.1MiB/s (89.2MB/s), 20.7MiB/s-22.9MiB/s (21.7MB/s-24.0MB/s), io=426MiB (446MB), run=5002-5002msec 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.787 00:28:11.787 real 0m24.447s 00:28:11.787 user 4m53.864s 00:28:11.787 sys 0m10.002s 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:11.787 00:10:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.787 ************************************ 00:28:11.787 END TEST fio_dif_rand_params 00:28:11.787 ************************************ 00:28:11.787 00:10:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:11.787 00:10:12 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:11.787 00:10:12 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:11.787 00:10:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:12.047 ************************************ 00:28:12.047 START TEST fio_dif_digest 00:28:12.047 ************************************ 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:12.047 
00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.047 bdev_null0 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.047 [2024-05-15 00:10:12.414770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1337 -- # shift 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.047 00:10:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.047 { 00:28:12.047 "params": { 00:28:12.047 "name": "Nvme$subsystem", 00:28:12.047 "trtype": "$TEST_TRANSPORT", 00:28:12.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.047 "adrfam": "ipv4", 00:28:12.047 "trsvcid": "$NVMF_PORT", 00:28:12.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.048 "hdgst": ${hdgst:-false}, 00:28:12.048 "ddgst": ${ddgst:-false} 00:28:12.048 }, 00:28:12.048 "method": "bdev_nvme_attach_controller" 00:28:12.048 } 00:28:12.048 EOF 00:28:12.048 )") 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
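
Editor's note: the fio_dif_digest setup above builds its target side from a null bdev with 16-byte metadata and DIF type 3, exported through nqn.2016-06.io.spdk:cnode0 on the TCP listener. rpc_cmd in the harness is a thin wrapper around scripts/rpc.py, so the same sequence can be replayed by hand against a running nvmf_tgt; this is a sketch under the assumption that the TCP transport was already created earlier in dif.sh (outside this excerpt) and reuses $SPDK_DIR from the previous sketch.

RPC=$SPDK_DIR/scripts/rpc.py
# DIF-capable backing device: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 3.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Export it over the already-created TCP transport.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420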
00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:12.048 "params": { 00:28:12.048 "name": "Nvme0", 00:28:12.048 "trtype": "tcp", 00:28:12.048 "traddr": "10.0.0.2", 00:28:12.048 "adrfam": "ipv4", 00:28:12.048 "trsvcid": "4420", 00:28:12.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:12.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:12.048 "hdgst": true, 00:28:12.048 "ddgst": true 00:28:12.048 }, 00:28:12.048 "method": "bdev_nvme_attach_controller" 00:28:12.048 }' 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:12.048 00:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:12.306 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:12.307 ... 
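
Editor's note: the only functional change relative to the earlier run is visible in the printed config above: "hdgst": true and "ddgst": true, so the initiator requests CRC32C header and data digests on the NVMe/TCP connection and the workload exercises digest generation and verification. Reusing the hand-written /tmp/bdev.json from the first sketch (an assumed layout, not a harness file), the same two switches could be flipped with jq:

# Turn on NVMe/TCP header and data digests in the attach-controller entry.
jq '.subsystems[0].config[0].params += {"hdgst": true, "ddgst": true}' \
    /tmp/bdev.json > /tmp/bdev_digest.json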
00:28:12.307 fio-3.35 00:28:12.307 Starting 3 threads 00:28:12.307 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.520 00:28:24.520 filename0: (groupid=0, jobs=1): err= 0: pid=3756337: Wed May 15 00:10:23 2024 00:28:24.520 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(356MiB/10047msec) 00:28:24.520 slat (nsec): min=3968, max=19126, avg=11075.47, stdev=1834.63 00:28:24.520 clat (usec): min=6551, max=54922, avg=10559.20, stdev=2872.45 00:28:24.520 lat (usec): min=6563, max=54934, avg=10570.28, stdev=2872.54 00:28:24.520 clat percentiles (usec): 00:28:24.520 | 1.00th=[ 7046], 5.00th=[ 7898], 10.00th=[ 8717], 20.00th=[ 9634], 00:28:24.520 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:28:24.520 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12256], 00:28:24.520 | 99.00th=[13304], 99.50th=[14615], 99.90th=[53216], 99.95th=[53740], 00:28:24.520 | 99.99th=[54789] 00:28:24.520 bw ( KiB/s): min=32000, max=39936, per=35.59%, avg=36403.20, stdev=1772.84, samples=20 00:28:24.520 iops : min= 250, max= 312, avg=284.40, stdev=13.85, samples=20 00:28:24.520 lat (msec) : 10=29.54%, 20=70.07%, 50=0.04%, 100=0.35% 00:28:24.520 cpu : usr=91.55%, sys=8.08%, ctx=16, majf=0, minf=98 00:28:24.520 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.520 issued rwts: total=2847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.520 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:24.520 filename0: (groupid=0, jobs=1): err= 0: pid=3756338: Wed May 15 00:10:23 2024 00:28:24.520 read: IOPS=260, BW=32.6MiB/s (34.1MB/s)(327MiB/10048msec) 00:28:24.520 slat (nsec): min=6201, max=29523, avg=11080.26, stdev=1866.49 00:28:24.520 clat (usec): min=5001, max=99825, avg=11484.30, stdev=5687.32 00:28:24.520 lat (usec): min=5010, max=99836, avg=11495.38, stdev=5687.39 00:28:24.520 clat percentiles (msec): 00:28:24.520 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:28:24.520 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:28:24.520 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:28:24.520 | 99.00th=[ 54], 99.50th=[ 56], 99.90th=[ 61], 99.95th=[ 100], 00:28:24.520 | 99.99th=[ 101] 00:28:24.520 bw ( KiB/s): min=27904, max=37888, per=32.72%, avg=33472.00, stdev=2982.99, samples=20 00:28:24.520 iops : min= 218, max= 296, avg=261.50, stdev=23.30, samples=20 00:28:24.520 lat (msec) : 10=17.23%, 20=81.40%, 50=0.04%, 100=1.34% 00:28:24.520 cpu : usr=91.28%, sys=8.37%, ctx=18, majf=0, minf=138 00:28:24.520 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.520 issued rwts: total=2618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.520 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:24.520 filename0: (groupid=0, jobs=1): err= 0: pid=3756339: Wed May 15 00:10:23 2024 00:28:24.520 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(321MiB/10046msec) 00:28:24.520 slat (nsec): min=6219, max=24552, avg=11079.77, stdev=1892.88 00:28:24.520 clat (usec): min=5451, max=58625, avg=11719.67, stdev=5423.47 00:28:24.520 lat (usec): min=5458, max=58632, avg=11730.75, stdev=5423.43 00:28:24.520 clat percentiles (usec): 00:28:24.520 | 1.00th=[ 7111], 5.00th=[ 
8455], 10.00th=[ 9765], 20.00th=[10421], 00:28:24.520 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:28:24.520 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12518], 95.00th=[13173], 00:28:24.520 | 99.00th=[54264], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:28:24.520 | 99.99th=[58459] 00:28:24.520 bw ( KiB/s): min=24064, max=37632, per=32.06%, avg=32793.60, stdev=3783.35, samples=20 00:28:24.520 iops : min= 188, max= 294, avg=256.20, stdev=29.56, samples=20 00:28:24.520 lat (msec) : 10=12.83%, 20=85.69%, 50=0.08%, 100=1.40% 00:28:24.520 cpu : usr=90.96%, sys=8.70%, ctx=21, majf=0, minf=141 00:28:24.520 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:24.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.520 issued rwts: total=2565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.520 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:24.520 00:28:24.520 Run status group 0 (all jobs): 00:28:24.520 READ: bw=99.9MiB/s (105MB/s), 31.9MiB/s-35.4MiB/s (33.5MB/s-37.1MB/s), io=1004MiB (1053MB), run=10046-10048msec 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.520 00:28:24.520 real 0m11.196s 00:28:24.520 user 0m35.675s 00:28:24.520 sys 0m2.863s 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:24.520 00:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.520 ************************************ 00:28:24.520 END TEST fio_dif_digest 00:28:24.520 ************************************ 00:28:24.520 00:10:23 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:24.520 00:10:23 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:24.520 rmmod nvme_tcp 00:28:24.520 rmmod nvme_fabrics 00:28:24.520 rmmod 
nvme_keyring 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3747440 ']' 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3747440 00:28:24.520 00:10:23 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3747440 ']' 00:28:24.520 00:10:23 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3747440 00:28:24.520 00:10:23 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:28:24.520 00:10:23 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:24.520 00:10:23 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3747440 00:28:24.520 00:10:23 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:24.520 00:10:23 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:24.520 00:10:23 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3747440' 00:28:24.520 killing process with pid 3747440 00:28:24.520 00:10:23 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3747440 00:28:24.520 [2024-05-15 00:10:23.742495] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:24.520 00:10:23 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3747440 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:24.520 00:10:23 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:26.428 Waiting for block devices as requested 00:28:26.688 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:26.688 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:26.688 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:26.688 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:27.012 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:27.012 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:27.012 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:27.012 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:27.271 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:27.271 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:27.271 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:27.531 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:27.531 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:27.531 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:27.531 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:27.790 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:27.790 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:28:28.049 00:10:28 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:28.049 00:10:28 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:28.049 00:10:28 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:28.049 00:10:28 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:28.049 00:10:28 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.049 00:10:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:28.049 00:10:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.953 00:10:30 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:29.953 00:28:29.953 real 
1m16.254s 00:28:29.953 user 7m14.042s 00:28:29.953 sys 0m30.703s 00:28:29.953 00:10:30 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:29.953 00:10:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:29.953 ************************************ 00:28:29.953 END TEST nvmf_dif 00:28:29.953 ************************************ 00:28:30.211 00:10:30 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:30.212 00:10:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:30.212 00:10:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:30.212 00:10:30 -- common/autotest_common.sh@10 -- # set +x 00:28:30.212 ************************************ 00:28:30.212 START TEST nvmf_abort_qd_sizes 00:28:30.212 ************************************ 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:30.212 * Looking for test storage... 00:28:30.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
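
Editor's note: with NET_TYPE=phy, the nvmftestinit call that follows scans the PCI bus for supported NICs (the two E810 ports show up below as cvl_0_0 and cvl_0_1), then nvmf_tcp_init moves the target-side port into a private network namespace and addresses both ends so target (10.0.0.2) and initiator (10.0.0.1) talk over a real link. Condensed, the wiring recorded further down in the trace amounts to the sketch below; interface names and addresses are exactly the ones printed there.

# Sketch of the namespace wiring performed by nvmf_tcp_init below (run as root).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target sanity check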
00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.212 00:10:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- 
nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:36.781 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:36.781 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:36.781 Found net devices under 0000:af:00.0: cvl_0_0 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:36.781 Found net devices under 0000:af:00.1: cvl_0_1 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:36.781 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.041 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.041 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.041 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:37.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:28:37.041 00:28:37.041 --- 10.0.0.2 ping statistics --- 00:28:37.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.041 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:28:37.041 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:28:37.041 00:28:37.041 --- 10.0.0.1 ping statistics --- 00:28:37.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.041 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:28:37.041 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.041 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:37.041 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:37.041 00:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:40.329 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:40.329 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:40.588 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:40.588 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:40.588 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:42.495 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3764886 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3764886 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3764886 ']' 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
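
Editor's note: once the link is verified, nvmfappstart launches nvmf_tgt inside the namespace that now owns the 10.0.0.2 port (core mask 0xf, all tracepoint groups) and waits for its RPC socket before configuring it. A by-hand equivalent is sketched below; the harness uses its own waitforlisten helper, so the rpc_get_methods polling loop here is just an assumed readiness probe.

# Start the target in the namespace created above and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
until $SPDK_DIR/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done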
00:28:42.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:42.495 00:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:42.495 [2024-05-15 00:10:42.782475] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:28:42.495 [2024-05-15 00:10:42.782522] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.495 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.495 [2024-05-15 00:10:42.856066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.495 [2024-05-15 00:10:42.932420] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.495 [2024-05-15 00:10:42.932462] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.495 [2024-05-15 00:10:42.932472] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.495 [2024-05-15 00:10:42.932480] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.495 [2024-05-15 00:10:42.932487] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.495 [2024-05-15 00:10:42.932533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.495 [2024-05-15 00:10:42.932551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.495 [2024-05-15 00:10:42.932647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.495 [2024-05-15 00:10:42.932649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:43.063 00:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:43.322 ************************************ 00:28:43.322 START TEST spdk_target_abort 00:28:43.322 ************************************ 00:28:43.322 00:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:28:43.322 00:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:43.322 00:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:28:43.322 00:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.322 00:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.610 spdk_targetn1 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.610 [2024-05-15 00:10:46.536018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.610 [2024-05-15 00:10:46.572061] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:46.610 [2024-05-15 00:10:46.572326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:46.610 00:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:46.610 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.206 Initializing NVMe Controllers 00:28:49.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:49.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:49.206 Initialization complete. Launching workers. 00:28:49.206 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6540, failed: 0 00:28:49.206 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1629, failed to submit 4911 00:28:49.206 success 931, unsuccess 698, failed 0 00:28:49.206 00:10:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:49.206 00:10:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:49.206 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.493 Initializing NVMe Controllers 00:28:52.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:52.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:52.493 Initialization complete. Launching workers. 00:28:52.493 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8546, failed: 0 00:28:52.493 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 7292 00:28:52.493 success 313, unsuccess 941, failed 0 00:28:52.493 00:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:52.493 00:10:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:52.493 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.780 Initializing NVMe Controllers 00:28:55.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:55.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:55.780 Initialization complete. Launching workers. 
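The spdk_target_abort passes in this block reduce to a short RPC setup followed by the abort example at increasing queue depths. A minimal sketch of that flow, assuming rpc_cmd in the trace is the usual thin wrapper around scripts/rpc.py and that spdk_tgt is already listening on its default RPC socket (addresses, NQN and example arguments are copied from the log):

  # Claim the local NVMe device and export it over NVMe-oF/TCP
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # Drive mixed read/write traffic and issue aborts at each queue depth the test covers
  for qd in 4 24 64; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

The "success, unsuccess, failed" line printed after each pass is the example's own tally of how the submitted abort commands completed.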
00:28:55.780 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35579, failed: 0 00:28:55.780 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2677, failed to submit 32902 00:28:55.780 success 692, unsuccess 1985, failed 0 00:28:55.780 00:10:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:55.780 00:10:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.780 00:10:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:55.780 00:10:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.780 00:10:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:55.780 00:10:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.780 00:10:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3764886 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3764886 ']' 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3764886 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3764886 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3764886' 00:28:57.685 killing process with pid 3764886 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3764886 00:28:57.685 [2024-05-15 00:10:58.207563] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:57.685 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3764886 00:28:57.945 00:28:57.945 real 0m14.727s 00:28:57.945 user 0m58.237s 00:28:57.945 sys 0m2.711s 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:57.945 ************************************ 00:28:57.945 END TEST spdk_target_abort 00:28:57.945 ************************************ 00:28:57.945 00:10:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:57.945 00:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:57.945 00:10:58 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:28:57.945 00:10:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:57.945 ************************************ 00:28:57.945 START TEST kernel_target_abort 00:28:57.945 ************************************ 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:57.945 00:10:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:01.233 Waiting for block devices as requested 00:29:01.233 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:01.233 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:01.233 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:01.233 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:01.233 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:01.233 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:01.233 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:01.233 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:01.498 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:01.498 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:01.498 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:01.761 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:01.761 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:01.761 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:02.020 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:02.020 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:02.020 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:02.279 No valid GPT data, bailing 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:02.279 00:11:02 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:02.279 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:29:02.539 00:29:02.539 Discovery Log Number of Records 2, Generation counter 2 00:29:02.539 =====Discovery Log Entry 0====== 00:29:02.539 trtype: tcp 00:29:02.539 adrfam: ipv4 00:29:02.539 subtype: current discovery subsystem 00:29:02.539 treq: not specified, sq flow control disable supported 00:29:02.539 portid: 1 00:29:02.539 trsvcid: 4420 00:29:02.539 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:02.539 traddr: 10.0.0.1 00:29:02.539 eflags: none 00:29:02.539 sectype: none 00:29:02.539 =====Discovery Log Entry 1====== 00:29:02.539 trtype: tcp 00:29:02.539 adrfam: ipv4 00:29:02.539 subtype: nvme subsystem 00:29:02.539 treq: not specified, sq flow control disable supported 00:29:02.539 portid: 1 00:29:02.539 trsvcid: 4420 00:29:02.539 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:02.539 traddr: 10.0.0.1 00:29:02.539 eflags: none 00:29:02.539 sectype: none 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:02.539 00:11:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:02.539 00:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:02.539 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.830 Initializing NVMe Controllers 00:29:05.830 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:05.830 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:05.830 Initialization complete. Launching workers. 00:29:05.830 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59935, failed: 0 00:29:05.830 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 59935, failed to submit 0 00:29:05.830 success 0, unsuccess 59935, failed 0 00:29:05.830 00:11:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:05.830 00:11:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:05.830 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.120 Initializing NVMe Controllers 00:29:09.120 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:09.120 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:09.120 Initialization complete. Launching workers. 
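The configure_kernel_target steps traced just before this point build an in-kernel nvmet target over configfs and back it with the reclaimed /dev/nvme0n1. A rough reconstruction follows; xtrace does not show redirection targets, so the attribute names are filled in from the standard nvmet configfs layout (the attr_model write in particular is an assumption), while the paths, addresses and the discover call are as logged:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet                                   # nvmet_tcp also ends up loaded; the teardown removes both
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$sub/attr_model"   # assumed redirect target
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420         # should list the discovery subsystem plus testnqn

Against this kernel target every abort in these passes comes back "unsuccess" (success 0), presumably because the Linux target completes the outstanding I/O before the abort can take effect, which is why the counters differ so sharply from the spdk_target runs.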
00:29:09.120 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 110066, failed: 0 00:29:09.120 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27790, failed to submit 82276 00:29:09.120 success 0, unsuccess 27790, failed 0 00:29:09.120 00:11:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:09.120 00:11:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:09.120 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.735 Initializing NVMe Controllers 00:29:11.735 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:11.735 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:11.735 Initialization complete. Launching workers. 00:29:11.735 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 105551, failed: 0 00:29:11.735 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26390, failed to submit 79161 00:29:11.735 success 0, unsuccess 26390, failed 0 00:29:11.735 00:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:11.735 00:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:11.735 00:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:11.735 00:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:11.735 00:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:11.735 00:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:11.735 00:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:11.735 00:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:11.735 00:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:11.735 00:11:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:15.020 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:29:15.020 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:15.020 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:16.399 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:29:16.400 00:29:16.400 real 0m18.390s 00:29:16.400 user 0m6.479s 00:29:16.400 sys 0m5.914s 00:29:16.400 00:11:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:16.400 00:11:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:16.400 ************************************ 00:29:16.400 END TEST kernel_target_abort 00:29:16.400 ************************************ 00:29:16.400 00:11:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:16.400 00:11:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:16.400 00:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:16.400 00:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:16.400 00:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:16.400 00:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:16.400 00:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:16.400 00:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:16.400 rmmod nvme_tcp 00:29:16.400 rmmod nvme_fabrics 00:29:16.658 rmmod nvme_keyring 00:29:16.658 00:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:16.658 00:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:16.658 00:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:16.658 00:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3764886 ']' 00:29:16.658 00:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3764886 00:29:16.658 00:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3764886 ']' 00:29:16.658 00:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3764886 00:29:16.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3764886) - No such process 00:29:16.658 00:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3764886 is not found' 00:29:16.658 Process with pid 3764886 is not found 00:29:16.658 00:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:16.658 00:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:19.190 Waiting for block devices as requested 00:29:19.449 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:19.449 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:19.449 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:19.449 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:19.708 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:19.708 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:19.708 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:19.967 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:19.967 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:19.967 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:20.225 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:20.225 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:20.225 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:20.225 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:20.485 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:20.485 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:20.485 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:29:20.744 00:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:20.744 00:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:20.744 00:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:20.744 00:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:20.744 00:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.744 00:11:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:20.744 00:11:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.277 00:11:23 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:23.277 00:29:23.277 real 0m52.660s 00:29:23.277 user 1m9.255s 00:29:23.277 sys 0m18.658s 00:29:23.277 00:11:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:23.277 00:11:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:23.277 ************************************ 00:29:23.277 END TEST nvmf_abort_qd_sizes 00:29:23.277 ************************************ 00:29:23.277 00:11:23 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:23.277 00:11:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:23.277 00:11:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:23.277 00:11:23 -- common/autotest_common.sh@10 -- # set +x 00:29:23.277 ************************************ 00:29:23.277 START TEST keyring_file 00:29:23.277 ************************************ 00:29:23.277 00:11:23 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:23.277 * Looking for test storage... 
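Each test in this log is driven through run_test, which is what produces the asterisk banners, the START TEST / END TEST markers and the real/user/sys timings seen for spdk_target_abort, kernel_target_abort, nvmf_abort_qd_sizes and now keyring_file. A simplified approximation of the helper from autotest_common.sh (the real function also propagates exit codes and manages xtrace state):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # e.g. run_test keyring_file .../spdk/test/keyring/file.sh
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }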
00:29:23.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:23.277 00:11:23 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:23.277 00:11:23 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.277 00:11:23 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:23.277 00:11:23 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.277 00:11:23 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.277 00:11:23 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.278 00:11:23 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.278 00:11:23 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.278 00:11:23 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.278 00:11:23 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.278 00:11:23 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.278 00:11:23 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.278 00:11:23 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:23.278 00:11:23 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bUqGFlM8Mm 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:23.278 00:11:23 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bUqGFlM8Mm 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bUqGFlM8Mm 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.bUqGFlM8Mm 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cIa7m7x2nP 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:23.278 00:11:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cIa7m7x2nP 00:29:23.278 00:11:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cIa7m7x2nP 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.cIa7m7x2nP 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@30 -- # tgtpid=3774235 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3774235 00:29:23.278 00:11:23 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3774235 ']' 00:29:23.278 00:11:23 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.278 00:11:23 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:23.278 00:11:23 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.278 00:11:23 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:23.278 00:11:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:23.278 00:11:23 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:23.278 [2024-05-15 00:11:23.618628] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
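Before the keyring cases run, two TLS PSKs are staged as key files: each hex key is passed through format_interchange_psk, written to a mktemp path and restricted to mode 0600 so the keyring module will later accept it. A condensed sketch of that flow, mirroring the prep_key helper in the trace with a hypothetical stage_key wrapper (the body of format_interchange_psk, the "python -" step above, is not reproduced here):

  stage_key() {                                         # hypothetical wrapper mirroring prep_key
    local key=$1 digest=$2 path
    path=$(mktemp)                                      # e.g. /tmp/tmp.bUqGFlM8Mm
    format_interchange_psk "$key" "$digest" > "$path"   # emits the NVMeTLSkey-1 interchange form of the hex key
    chmod 0600 "$path"                                  # looser modes are rejected, as the 0660 case later shows
    echo "$path"
  }
  key0path=$(stage_key 00112233445566778899aabbccddeeff 0)
  key1path=$(stage_key 112233445566778899aabbccddeeff00 0)

The spdk_tgt launched next (pid 3774235) plays the target role; the bdevperf instance started shortly afterwards on /var/tmp/bperf.sock is the host side, and it is the process that receives the keyring_file_add_key and bdev_nvme_attach_controller RPCs below.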
00:29:23.278 [2024-05-15 00:11:23.618680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774235 ] 00:29:23.278 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.278 [2024-05-15 00:11:23.686747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.278 [2024-05-15 00:11:23.760183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.844 00:11:24 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:23.844 00:11:24 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:29:23.844 00:11:24 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:23.844 00:11:24 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.844 00:11:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:23.844 [2024-05-15 00:11:24.408822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.844 null0 00:29:24.105 [2024-05-15 00:11:24.440864] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:24.105 [2024-05-15 00:11:24.440926] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:24.105 [2024-05-15 00:11:24.441217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:24.105 [2024-05-15 00:11:24.448900] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.105 00:11:24 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:24.105 [2024-05-15 00:11:24.460931] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:24.105 request: 00:29:24.105 { 00:29:24.105 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.105 "secure_channel": false, 00:29:24.105 "listen_address": { 00:29:24.105 "trtype": "tcp", 00:29:24.105 "traddr": "127.0.0.1", 00:29:24.105 "trsvcid": "4420" 00:29:24.105 }, 00:29:24.105 "method": "nvmf_subsystem_add_listener", 00:29:24.105 "req_id": 1 00:29:24.105 } 00:29:24.105 Got JSON-RPC error response 00:29:24.105 response: 00:29:24.105 { 00:29:24.105 "code": -32602, 00:29:24.105 
"message": "Invalid parameters" 00:29:24.105 } 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:24.105 00:11:24 keyring_file -- keyring/file.sh@46 -- # bperfpid=3774359 00:29:24.105 00:11:24 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3774359 /var/tmp/bperf.sock 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3774359 ']' 00:29:24.105 00:11:24 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.106 00:11:24 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:24.106 00:11:24 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:24.106 00:11:24 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.106 00:11:24 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:24.106 00:11:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:24.106 [2024-05-15 00:11:24.511000] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 00:29:24.106 [2024-05-15 00:11:24.511044] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774359 ] 00:29:24.106 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.106 [2024-05-15 00:11:24.580200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.106 [2024-05-15 00:11:24.654911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.040 00:11:25 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:25.040 00:11:25 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:29:25.040 00:11:25 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bUqGFlM8Mm 00:29:25.040 00:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bUqGFlM8Mm 00:29:25.040 00:11:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cIa7m7x2nP 00:29:25.040 00:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cIa7m7x2nP 00:29:25.298 00:11:25 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:25.298 00:11:25 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:25.298 00:11:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:25.298 00:11:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:25.298 00:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:29:25.298 00:11:25 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.bUqGFlM8Mm == \/\t\m\p\/\t\m\p\.\b\U\q\G\F\l\M\8\M\m ]] 00:29:25.298 00:11:25 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:29:25.298 00:11:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:25.298 00:11:25 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:25.298 00:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:25.298 00:11:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:25.556 00:11:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.cIa7m7x2nP == \/\t\m\p\/\t\m\p\.\c\I\a\7\m\7\x\2\n\P ]] 00:29:25.556 00:11:26 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:25.556 00:11:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:25.556 00:11:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:25.556 00:11:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:25.556 00:11:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:25.556 00:11:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:25.814 00:11:26 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:25.814 00:11:26 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:25.814 00:11:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:25.814 00:11:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:25.814 00:11:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:25.814 00:11:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:25.814 00:11:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:25.814 00:11:26 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:25.815 00:11:26 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:25.815 00:11:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:26.072 [2024-05-15 00:11:26.533163] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:26.072 nvme0n1 00:29:26.072 00:11:26 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:26.072 00:11:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:26.072 00:11:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:26.072 00:11:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:26.072 00:11:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:26.072 00:11:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:26.330 00:11:26 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:26.330 00:11:26 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:26.330 
00:11:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:26.330 00:11:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:26.330 00:11:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:26.330 00:11:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:26.330 00:11:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:26.587 00:11:26 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:26.587 00:11:26 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:26.587 Running I/O for 1 seconds... 00:29:27.547 00:29:27.547 Latency(us) 00:29:27.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.547 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:27.547 nvme0n1 : 1.01 9303.62 36.34 0.00 0.00 13687.38 8650.75 24536.68 00:29:27.547 =================================================================================================================== 00:29:27.548 Total : 9303.62 36.34 0.00 0.00 13687.38 8650.75 24536.68 00:29:27.548 0 00:29:27.548 00:11:28 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:27.548 00:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:27.821 00:11:28 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:27.821 00:11:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:27.821 00:11:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:27.821 00:11:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:27.821 00:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:27.821 00:11:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:28.079 00:11:28 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:28.079 00:11:28 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:28.079 00:11:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:28.079 00:11:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:28.079 00:11:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:28.079 00:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:28.079 00:11:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:28.079 00:11:28 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:28.079 00:11:28 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:28.079 00:11:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:28.079 00:11:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:28.079 00:11:28 keyring_file -- common/autotest_common.sh@636 
-- # local arg=bperf_cmd 00:29:28.079 00:11:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:28.079 00:11:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:28.079 00:11:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:28.079 00:11:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:28.079 00:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:28.336 [2024-05-15 00:11:28.789698] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:28.336 [2024-05-15 00:11:28.790161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11820e0 (107): Transport endpoint is not connected 00:29:28.336 [2024-05-15 00:11:28.791154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11820e0 (9): Bad file descriptor 00:29:28.336 [2024-05-15 00:11:28.792154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:28.336 [2024-05-15 00:11:28.792166] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:28.336 [2024-05-15 00:11:28.792175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
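This is the core positive/negative check of the keyring test: attaching the bdevperf host to nqn.2016-06.io.spdk:cnode0 with key0 succeeded and sustained about 9.3k IOPS over the roughly one-second run, while the retry with key1 fails during connection setup and ends in the "Invalid parameters" JSON error shown next (the target side was presumably registered with key0, so the handshake cannot complete). Expressed as the underlying rpc.py calls against the bdevperf socket (socket path, NQNs and arguments as in the trace; bperf_cmd is assumed to be a thin wrapper around this):

  rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  # Matching key: the controller attaches and can serve I/O
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  rpc bdev_nvme_detach_controller nvme0
  # Mismatched key: the connection is torn down during setup and the RPC fails
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 || true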
00:29:28.336 request: 00:29:28.336 { 00:29:28.336 "name": "nvme0", 00:29:28.336 "trtype": "tcp", 00:29:28.336 "traddr": "127.0.0.1", 00:29:28.336 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:28.336 "adrfam": "ipv4", 00:29:28.336 "trsvcid": "4420", 00:29:28.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:28.336 "psk": "key1", 00:29:28.336 "method": "bdev_nvme_attach_controller", 00:29:28.336 "req_id": 1 00:29:28.336 } 00:29:28.336 Got JSON-RPC error response 00:29:28.336 response: 00:29:28.336 { 00:29:28.336 "code": -32602, 00:29:28.336 "message": "Invalid parameters" 00:29:28.336 } 00:29:28.336 00:11:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:28.336 00:11:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:28.336 00:11:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:28.336 00:11:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:28.336 00:11:28 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:28.336 00:11:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:28.336 00:11:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:28.336 00:11:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:28.336 00:11:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:28.336 00:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:28.594 00:11:28 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:28.594 00:11:28 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:28.594 00:11:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:28.594 00:11:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:28.594 00:11:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:28.594 00:11:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:28.594 00:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:28.594 00:11:29 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:28.594 00:11:29 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:28.594 00:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:28.851 00:11:29 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:28.852 00:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:29.109 00:11:29 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:29.109 00:11:29 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:29.109 00:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:29.109 00:11:29 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:29.109 00:11:29 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.bUqGFlM8Mm 00:29:29.109 00:11:29 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.bUqGFlM8Mm 00:29:29.109 00:11:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:29.109 00:11:29 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.bUqGFlM8Mm 00:29:29.109 00:11:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:29.109 00:11:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:29.109 00:11:29 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:29.109 00:11:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:29.109 00:11:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bUqGFlM8Mm 00:29:29.109 00:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bUqGFlM8Mm 00:29:29.366 [2024-05-15 00:11:29.834164] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bUqGFlM8Mm': 0100660 00:29:29.366 [2024-05-15 00:11:29.834200] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:29.366 request: 00:29:29.366 { 00:29:29.366 "name": "key0", 00:29:29.366 "path": "/tmp/tmp.bUqGFlM8Mm", 00:29:29.366 "method": "keyring_file_add_key", 00:29:29.366 "req_id": 1 00:29:29.366 } 00:29:29.366 Got JSON-RPC error response 00:29:29.366 response: 00:29:29.366 { 00:29:29.366 "code": -1, 00:29:29.366 "message": "Operation not permitted" 00:29:29.366 } 00:29:29.366 00:11:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:29.366 00:11:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:29.366 00:11:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:29.366 00:11:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:29.366 00:11:29 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.bUqGFlM8Mm 00:29:29.366 00:11:29 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bUqGFlM8Mm 00:29:29.366 00:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bUqGFlM8Mm 00:29:29.623 00:11:30 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.bUqGFlM8Mm 00:29:29.623 00:11:30 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:29.623 00:11:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:29.623 00:11:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:29.623 00:11:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:29.623 00:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:29.623 00:11:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:29.623 00:11:30 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:29.623 00:11:30 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:29.623 00:11:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:29.624 00:11:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:29.624 00:11:30 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:29.624 00:11:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:29.624 00:11:30 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:29.881 00:11:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:29.881 00:11:30 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:29.881 00:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:29.881 [2024-05-15 00:11:30.371571] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.bUqGFlM8Mm': No such file or directory 00:29:29.881 [2024-05-15 00:11:30.371598] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:29.881 [2024-05-15 00:11:30.371620] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:29.881 [2024-05-15 00:11:30.371628] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:29.881 [2024-05-15 00:11:30.371640] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:29.881 request: 00:29:29.881 { 00:29:29.881 "name": "nvme0", 00:29:29.881 "trtype": "tcp", 00:29:29.881 "traddr": "127.0.0.1", 00:29:29.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:29.881 "adrfam": "ipv4", 00:29:29.881 "trsvcid": "4420", 00:29:29.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.881 "psk": "key0", 00:29:29.881 "method": "bdev_nvme_attach_controller", 00:29:29.881 "req_id": 1 00:29:29.881 } 00:29:29.881 Got JSON-RPC error response 00:29:29.881 response: 00:29:29.881 { 00:29:29.881 "code": -19, 00:29:29.881 "message": "No such device" 00:29:29.881 } 00:29:29.881 00:11:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:29.881 00:11:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:29.881 00:11:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:29.881 00:11:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:29.881 00:11:30 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:29.881 00:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:30.139 00:11:30 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:30.139 00:11:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:30.139 00:11:30 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:30.139 00:11:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:30.139 00:11:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:30.139 00:11:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:30.139 00:11:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S7s0c2rFcw 00:29:30.139 00:11:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:30.139 00:11:30 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:30.139 00:11:30 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:30.139 00:11:30 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:30.139 00:11:30 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:30.139 00:11:30 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:30.139 00:11:30 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:30.139 00:11:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S7s0c2rFcw 00:29:30.139 00:11:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S7s0c2rFcw 00:29:30.139 00:11:30 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.S7s0c2rFcw 00:29:30.139 00:11:30 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7s0c2rFcw 00:29:30.139 00:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S7s0c2rFcw 00:29:30.396 00:11:30 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:30.396 00:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:30.654 nvme0n1 00:29:30.654 00:11:31 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:30.654 00:11:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:30.654 00:11:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:30.655 00:11:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:30.655 00:11:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:30.655 00:11:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:30.655 00:11:31 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:30.655 00:11:31 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:30.655 00:11:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:30.912 00:11:31 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:30.912 00:11:31 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:30.912 00:11:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:30.912 00:11:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:30.912 00:11:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.170 00:11:31 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:31.170 00:11:31 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:31.170 00:11:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:31.170 00:11:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:31.170 00:11:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:31.170 00:11:31 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.170 00:11:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:31.170 00:11:31 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:31.170 00:11:31 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:31.170 00:11:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:31.427 00:11:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:31.427 00:11:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.427 00:11:31 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:31.684 00:11:32 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:31.684 00:11:32 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7s0c2rFcw 00:29:31.684 00:11:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S7s0c2rFcw 00:29:31.940 00:11:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cIa7m7x2nP 00:29:31.941 00:11:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cIa7m7x2nP 00:29:31.941 00:11:32 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:31.941 00:11:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:32.198 nvme0n1 00:29:32.198 00:11:32 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:32.198 00:11:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:32.456 00:11:32 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:32.456 "subsystems": [ 00:29:32.456 { 00:29:32.456 "subsystem": "keyring", 00:29:32.456 "config": [ 00:29:32.456 { 00:29:32.456 "method": "keyring_file_add_key", 00:29:32.456 "params": { 00:29:32.456 "name": "key0", 00:29:32.456 "path": "/tmp/tmp.S7s0c2rFcw" 00:29:32.456 } 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "method": "keyring_file_add_key", 00:29:32.456 "params": { 00:29:32.456 "name": "key1", 00:29:32.456 "path": "/tmp/tmp.cIa7m7x2nP" 00:29:32.456 } 00:29:32.456 } 00:29:32.456 ] 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "subsystem": "iobuf", 00:29:32.456 "config": [ 00:29:32.456 { 00:29:32.456 "method": "iobuf_set_options", 00:29:32.456 "params": { 00:29:32.456 "small_pool_count": 8192, 00:29:32.456 "large_pool_count": 1024, 00:29:32.456 "small_bufsize": 8192, 00:29:32.456 "large_bufsize": 135168 00:29:32.456 } 00:29:32.456 } 00:29:32.456 ] 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "subsystem": "sock", 00:29:32.456 "config": [ 00:29:32.456 { 00:29:32.456 "method": "sock_impl_set_options", 00:29:32.456 "params": { 00:29:32.456 
"impl_name": "posix", 00:29:32.456 "recv_buf_size": 2097152, 00:29:32.456 "send_buf_size": 2097152, 00:29:32.456 "enable_recv_pipe": true, 00:29:32.456 "enable_quickack": false, 00:29:32.456 "enable_placement_id": 0, 00:29:32.456 "enable_zerocopy_send_server": true, 00:29:32.456 "enable_zerocopy_send_client": false, 00:29:32.456 "zerocopy_threshold": 0, 00:29:32.456 "tls_version": 0, 00:29:32.456 "enable_ktls": false 00:29:32.456 } 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "method": "sock_impl_set_options", 00:29:32.456 "params": { 00:29:32.456 "impl_name": "ssl", 00:29:32.456 "recv_buf_size": 4096, 00:29:32.456 "send_buf_size": 4096, 00:29:32.456 "enable_recv_pipe": true, 00:29:32.456 "enable_quickack": false, 00:29:32.456 "enable_placement_id": 0, 00:29:32.456 "enable_zerocopy_send_server": true, 00:29:32.456 "enable_zerocopy_send_client": false, 00:29:32.456 "zerocopy_threshold": 0, 00:29:32.456 "tls_version": 0, 00:29:32.456 "enable_ktls": false 00:29:32.456 } 00:29:32.456 } 00:29:32.456 ] 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "subsystem": "vmd", 00:29:32.456 "config": [] 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "subsystem": "accel", 00:29:32.456 "config": [ 00:29:32.456 { 00:29:32.456 "method": "accel_set_options", 00:29:32.456 "params": { 00:29:32.456 "small_cache_size": 128, 00:29:32.456 "large_cache_size": 16, 00:29:32.456 "task_count": 2048, 00:29:32.456 "sequence_count": 2048, 00:29:32.456 "buf_count": 2048 00:29:32.456 } 00:29:32.456 } 00:29:32.456 ] 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "subsystem": "bdev", 00:29:32.456 "config": [ 00:29:32.456 { 00:29:32.456 "method": "bdev_set_options", 00:29:32.456 "params": { 00:29:32.456 "bdev_io_pool_size": 65535, 00:29:32.456 "bdev_io_cache_size": 256, 00:29:32.456 "bdev_auto_examine": true, 00:29:32.456 "iobuf_small_cache_size": 128, 00:29:32.456 "iobuf_large_cache_size": 16 00:29:32.456 } 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "method": "bdev_raid_set_options", 00:29:32.456 "params": { 00:29:32.456 "process_window_size_kb": 1024 00:29:32.456 } 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "method": "bdev_iscsi_set_options", 00:29:32.456 "params": { 00:29:32.456 "timeout_sec": 30 00:29:32.456 } 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "method": "bdev_nvme_set_options", 00:29:32.456 "params": { 00:29:32.456 "action_on_timeout": "none", 00:29:32.456 "timeout_us": 0, 00:29:32.456 "timeout_admin_us": 0, 00:29:32.456 "keep_alive_timeout_ms": 10000, 00:29:32.456 "arbitration_burst": 0, 00:29:32.456 "low_priority_weight": 0, 00:29:32.456 "medium_priority_weight": 0, 00:29:32.456 "high_priority_weight": 0, 00:29:32.456 "nvme_adminq_poll_period_us": 10000, 00:29:32.456 "nvme_ioq_poll_period_us": 0, 00:29:32.456 "io_queue_requests": 512, 00:29:32.456 "delay_cmd_submit": true, 00:29:32.456 "transport_retry_count": 4, 00:29:32.456 "bdev_retry_count": 3, 00:29:32.456 "transport_ack_timeout": 0, 00:29:32.456 "ctrlr_loss_timeout_sec": 0, 00:29:32.456 "reconnect_delay_sec": 0, 00:29:32.456 "fast_io_fail_timeout_sec": 0, 00:29:32.456 "disable_auto_failback": false, 00:29:32.456 "generate_uuids": false, 00:29:32.456 "transport_tos": 0, 00:29:32.456 "nvme_error_stat": false, 00:29:32.456 "rdma_srq_size": 0, 00:29:32.456 "io_path_stat": false, 00:29:32.456 "allow_accel_sequence": false, 00:29:32.456 "rdma_max_cq_size": 0, 00:29:32.456 "rdma_cm_event_timeout_ms": 0, 00:29:32.456 "dhchap_digests": [ 00:29:32.456 "sha256", 00:29:32.456 "sha384", 00:29:32.456 "sha512" 00:29:32.456 ], 00:29:32.456 "dhchap_dhgroups": [ 00:29:32.456 "null", 
00:29:32.456 "ffdhe2048", 00:29:32.456 "ffdhe3072", 00:29:32.456 "ffdhe4096", 00:29:32.456 "ffdhe6144", 00:29:32.456 "ffdhe8192" 00:29:32.456 ] 00:29:32.456 } 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "method": "bdev_nvme_attach_controller", 00:29:32.456 "params": { 00:29:32.456 "name": "nvme0", 00:29:32.456 "trtype": "TCP", 00:29:32.456 "adrfam": "IPv4", 00:29:32.456 "traddr": "127.0.0.1", 00:29:32.456 "trsvcid": "4420", 00:29:32.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:32.456 "prchk_reftag": false, 00:29:32.456 "prchk_guard": false, 00:29:32.456 "ctrlr_loss_timeout_sec": 0, 00:29:32.456 "reconnect_delay_sec": 0, 00:29:32.456 "fast_io_fail_timeout_sec": 0, 00:29:32.456 "psk": "key0", 00:29:32.456 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:32.456 "hdgst": false, 00:29:32.456 "ddgst": false 00:29:32.456 } 00:29:32.456 }, 00:29:32.456 { 00:29:32.456 "method": "bdev_nvme_set_hotplug", 00:29:32.456 "params": { 00:29:32.457 "period_us": 100000, 00:29:32.457 "enable": false 00:29:32.457 } 00:29:32.457 }, 00:29:32.457 { 00:29:32.457 "method": "bdev_wait_for_examine" 00:29:32.457 } 00:29:32.457 ] 00:29:32.457 }, 00:29:32.457 { 00:29:32.457 "subsystem": "nbd", 00:29:32.457 "config": [] 00:29:32.457 } 00:29:32.457 ] 00:29:32.457 }' 00:29:32.457 00:11:32 keyring_file -- keyring/file.sh@114 -- # killprocess 3774359 00:29:32.457 00:11:32 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3774359 ']' 00:29:32.457 00:11:32 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3774359 00:29:32.457 00:11:32 keyring_file -- common/autotest_common.sh@951 -- # uname 00:29:32.457 00:11:32 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:32.457 00:11:32 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3774359 00:29:32.457 00:11:32 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:32.457 00:11:32 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:32.457 00:11:32 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3774359' 00:29:32.457 killing process with pid 3774359 00:29:32.457 00:11:32 keyring_file -- common/autotest_common.sh@965 -- # kill 3774359 00:29:32.457 Received shutdown signal, test time was about 1.000000 seconds 00:29:32.457 00:29:32.457 Latency(us) 00:29:32.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.457 =================================================================================================================== 00:29:32.457 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:32.457 00:11:32 keyring_file -- common/autotest_common.sh@970 -- # wait 3774359 00:29:32.714 00:11:33 keyring_file -- keyring/file.sh@117 -- # bperfpid=3775910 00:29:32.714 00:11:33 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3775910 /var/tmp/bperf.sock 00:29:32.714 00:11:33 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3775910 ']' 00:29:32.714 00:11:33 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:32.714 00:11:33 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:32.714 00:11:33 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:32.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:32.714 00:11:33 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:32.714 00:11:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:32.714 00:11:33 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:32.714 00:11:33 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:32.714 "subsystems": [ 00:29:32.714 { 00:29:32.714 "subsystem": "keyring", 00:29:32.714 "config": [ 00:29:32.714 { 00:29:32.714 "method": "keyring_file_add_key", 00:29:32.714 "params": { 00:29:32.714 "name": "key0", 00:29:32.714 "path": "/tmp/tmp.S7s0c2rFcw" 00:29:32.714 } 00:29:32.714 }, 00:29:32.714 { 00:29:32.714 "method": "keyring_file_add_key", 00:29:32.714 "params": { 00:29:32.714 "name": "key1", 00:29:32.714 "path": "/tmp/tmp.cIa7m7x2nP" 00:29:32.715 } 00:29:32.715 } 00:29:32.715 ] 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "subsystem": "iobuf", 00:29:32.715 "config": [ 00:29:32.715 { 00:29:32.715 "method": "iobuf_set_options", 00:29:32.715 "params": { 00:29:32.715 "small_pool_count": 8192, 00:29:32.715 "large_pool_count": 1024, 00:29:32.715 "small_bufsize": 8192, 00:29:32.715 "large_bufsize": 135168 00:29:32.715 } 00:29:32.715 } 00:29:32.715 ] 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "subsystem": "sock", 00:29:32.715 "config": [ 00:29:32.715 { 00:29:32.715 "method": "sock_impl_set_options", 00:29:32.715 "params": { 00:29:32.715 "impl_name": "posix", 00:29:32.715 "recv_buf_size": 2097152, 00:29:32.715 "send_buf_size": 2097152, 00:29:32.715 "enable_recv_pipe": true, 00:29:32.715 "enable_quickack": false, 00:29:32.715 "enable_placement_id": 0, 00:29:32.715 "enable_zerocopy_send_server": true, 00:29:32.715 "enable_zerocopy_send_client": false, 00:29:32.715 "zerocopy_threshold": 0, 00:29:32.715 "tls_version": 0, 00:29:32.715 "enable_ktls": false 00:29:32.715 } 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "method": "sock_impl_set_options", 00:29:32.715 "params": { 00:29:32.715 "impl_name": "ssl", 00:29:32.715 "recv_buf_size": 4096, 00:29:32.715 "send_buf_size": 4096, 00:29:32.715 "enable_recv_pipe": true, 00:29:32.715 "enable_quickack": false, 00:29:32.715 "enable_placement_id": 0, 00:29:32.715 "enable_zerocopy_send_server": true, 00:29:32.715 "enable_zerocopy_send_client": false, 00:29:32.715 "zerocopy_threshold": 0, 00:29:32.715 "tls_version": 0, 00:29:32.715 "enable_ktls": false 00:29:32.715 } 00:29:32.715 } 00:29:32.715 ] 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "subsystem": "vmd", 00:29:32.715 "config": [] 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "subsystem": "accel", 00:29:32.715 "config": [ 00:29:32.715 { 00:29:32.715 "method": "accel_set_options", 00:29:32.715 "params": { 00:29:32.715 "small_cache_size": 128, 00:29:32.715 "large_cache_size": 16, 00:29:32.715 "task_count": 2048, 00:29:32.715 "sequence_count": 2048, 00:29:32.715 "buf_count": 2048 00:29:32.715 } 00:29:32.715 } 00:29:32.715 ] 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "subsystem": "bdev", 00:29:32.715 "config": [ 00:29:32.715 { 00:29:32.715 "method": "bdev_set_options", 00:29:32.715 "params": { 00:29:32.715 "bdev_io_pool_size": 65535, 00:29:32.715 "bdev_io_cache_size": 256, 00:29:32.715 "bdev_auto_examine": true, 00:29:32.715 "iobuf_small_cache_size": 128, 00:29:32.715 "iobuf_large_cache_size": 16 00:29:32.715 } 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "method": "bdev_raid_set_options", 00:29:32.715 "params": { 00:29:32.715 "process_window_size_kb": 
1024 00:29:32.715 } 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "method": "bdev_iscsi_set_options", 00:29:32.715 "params": { 00:29:32.715 "timeout_sec": 30 00:29:32.715 } 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "method": "bdev_nvme_set_options", 00:29:32.715 "params": { 00:29:32.715 "action_on_timeout": "none", 00:29:32.715 "timeout_us": 0, 00:29:32.715 "timeout_admin_us": 0, 00:29:32.715 "keep_alive_timeout_ms": 10000, 00:29:32.715 "arbitration_burst": 0, 00:29:32.715 "low_priority_weight": 0, 00:29:32.715 "medium_priority_weight": 0, 00:29:32.715 "high_priority_weight": 0, 00:29:32.715 "nvme_adminq_poll_period_us": 10000, 00:29:32.715 "nvme_ioq_poll_period_us": 0, 00:29:32.715 "io_queue_requests": 512, 00:29:32.715 "delay_cmd_submit": true, 00:29:32.715 "transport_retry_count": 4, 00:29:32.715 "bdev_retry_count": 3, 00:29:32.715 "transport_ack_timeout": 0, 00:29:32.715 "ctrlr_loss_timeout_sec": 0, 00:29:32.715 "reconnect_delay_sec": 0, 00:29:32.715 "fast_io_fail_timeout_sec": 0, 00:29:32.715 "disable_auto_failback": false, 00:29:32.715 "generate_uuids": false, 00:29:32.715 "transport_tos": 0, 00:29:32.715 "nvme_error_stat": false, 00:29:32.715 "rdma_srq_size": 0, 00:29:32.715 "io_path_stat": false, 00:29:32.715 "allow_accel_sequence": false, 00:29:32.715 "rdma_max_cq_size": 0, 00:29:32.715 "rdma_cm_event_timeout_ms": 0, 00:29:32.715 "dhchap_digests": [ 00:29:32.715 "sha256", 00:29:32.715 "sha384", 00:29:32.715 "sha512" 00:29:32.715 ], 00:29:32.715 "dhchap_dhgroups": [ 00:29:32.715 "null", 00:29:32.715 "ffdhe2048", 00:29:32.715 "ffdhe3072", 00:29:32.715 "ffdhe4096", 00:29:32.715 "ffdhe6144", 00:29:32.715 "ffdhe8192" 00:29:32.715 ] 00:29:32.715 } 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "method": "bdev_nvme_attach_controller", 00:29:32.715 "params": { 00:29:32.715 "name": "nvme0", 00:29:32.715 "trtype": "TCP", 00:29:32.715 "adrfam": "IPv4", 00:29:32.715 "traddr": "127.0.0.1", 00:29:32.715 "trsvcid": "4420", 00:29:32.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:32.715 "prchk_reftag": false, 00:29:32.715 "prchk_guard": false, 00:29:32.715 "ctrlr_loss_timeout_sec": 0, 00:29:32.715 "reconnect_delay_sec": 0, 00:29:32.715 "fast_io_fail_timeout_sec": 0, 00:29:32.715 "psk": "key0", 00:29:32.715 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:32.715 "hdgst": false, 00:29:32.715 "ddgst": false 00:29:32.715 } 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "method": "bdev_nvme_set_hotplug", 00:29:32.715 "params": { 00:29:32.715 "period_us": 100000, 00:29:32.715 "enable": false 00:29:32.715 } 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "method": "bdev_wait_for_examine" 00:29:32.715 } 00:29:32.715 ] 00:29:32.715 }, 00:29:32.715 { 00:29:32.715 "subsystem": "nbd", 00:29:32.715 "config": [] 00:29:32.715 } 00:29:32.715 ] 00:29:32.715 }' 00:29:32.715 [2024-05-15 00:11:33.227788] Starting SPDK v24.05-pre git sha1 52939f252 / DPDK 23.11.0 initialization... 
00:29:32.715 [2024-05-15 00:11:33.227843] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775910 ] 00:29:32.715 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.715 [2024-05-15 00:11:33.295004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.973 [2024-05-15 00:11:33.371021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.973 [2024-05-15 00:11:33.520795] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:33.537 00:11:34 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:33.537 00:11:34 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:29:33.537 00:11:34 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:33.537 00:11:34 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:33.537 00:11:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.795 00:11:34 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:33.795 00:11:34 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:33.795 00:11:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:33.795 00:11:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:33.795 00:11:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.795 00:11:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.795 00:11:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:33.795 00:11:34 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:33.795 00:11:34 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:33.795 00:11:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:33.795 00:11:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:33.795 00:11:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.795 00:11:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.795 00:11:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:34.052 00:11:34 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:34.052 00:11:34 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:34.052 00:11:34 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:34.052 00:11:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:34.310 00:11:34 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:34.310 00:11:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:34.310 00:11:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.S7s0c2rFcw /tmp/tmp.cIa7m7x2nP 00:29:34.310 00:11:34 keyring_file -- keyring/file.sh@20 -- # killprocess 3775910 00:29:34.310 00:11:34 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3775910 ']' 00:29:34.310 00:11:34 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3775910 00:29:34.310 00:11:34 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:29:34.310 00:11:34 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:34.310 00:11:34 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3775910 00:29:34.310 00:11:34 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:34.310 00:11:34 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:34.310 00:11:34 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3775910' 00:29:34.310 killing process with pid 3775910 00:29:34.310 00:11:34 keyring_file -- common/autotest_common.sh@965 -- # kill 3775910 00:29:34.310 Received shutdown signal, test time was about 1.000000 seconds 00:29:34.310 00:29:34.310 Latency(us) 00:29:34.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.310 =================================================================================================================== 00:29:34.310 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:34.310 00:11:34 keyring_file -- common/autotest_common.sh@970 -- # wait 3775910 00:29:34.567 00:11:34 keyring_file -- keyring/file.sh@21 -- # killprocess 3774235 00:29:34.567 00:11:34 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3774235 ']' 00:29:34.567 00:11:34 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3774235 00:29:34.567 00:11:34 keyring_file -- common/autotest_common.sh@951 -- # uname 00:29:34.567 00:11:34 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:34.567 00:11:34 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3774235 00:29:34.567 00:11:35 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:34.567 00:11:35 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:34.567 00:11:35 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3774235' 00:29:34.567 killing process with pid 3774235 00:29:34.567 00:11:35 keyring_file -- common/autotest_common.sh@965 -- # kill 3774235 00:29:34.568 [2024-05-15 00:11:35.035025] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:34.568 [2024-05-15 00:11:35.035056] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:34.568 00:11:35 keyring_file -- common/autotest_common.sh@970 -- # wait 3774235 00:29:34.825 00:29:34.825 real 0m12.025s 00:29:34.825 user 0m27.445s 00:29:34.825 sys 0m3.387s 00:29:34.825 00:11:35 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:34.825 00:11:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:34.825 ************************************ 00:29:34.825 END TEST keyring_file 00:29:34.825 ************************************ 00:29:34.825 00:11:35 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:29:34.826 00:11:35 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:29:34.826 00:11:35 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:34.826 00:11:35 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:34.826 00:11:35 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:29:34.826 00:11:35 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:29:34.826 00:11:35 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:29:34.826 00:11:35 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:34.826 
00:11:35 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:34.826 00:11:35 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:34.826 00:11:35 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:29:34.826 00:11:35 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:34.826 00:11:35 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:29:34.826 00:11:35 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:34.826 00:11:35 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:34.826 00:11:35 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:34.826 00:11:35 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:29:34.826 00:11:35 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:29:34.826 00:11:35 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:34.826 00:11:35 -- common/autotest_common.sh@10 -- # set +x 00:29:34.826 00:11:35 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:29:34.826 00:11:35 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:29:34.826 00:11:35 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:29:34.826 00:11:35 -- common/autotest_common.sh@10 -- # set +x 00:29:41.387 INFO: APP EXITING 00:29:41.387 INFO: killing all VMs 00:29:41.387 INFO: killing vhost app 00:29:41.387 INFO: EXIT DONE 00:29:43.914 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:29:43.914 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:29:46.598 Cleaning 00:29:46.598 Removing: /var/run/dpdk/spdk0/config 00:29:46.598 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:46.598 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:46.598 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:46.598 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:46.598 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:46.598 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:46.598 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:46.598 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:46.598 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:46.598 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:46.598 Removing: /var/run/dpdk/spdk1/config 00:29:46.598 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:46.598 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:46.598 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:46.598 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:46.598 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:46.599 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:46.599 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:46.599 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:46.599 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:46.599 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:46.599 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:46.599 Removing: /var/run/dpdk/spdk2/config 00:29:46.599 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:46.599 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:46.599 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:46.599 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:46.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:46.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:46.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:46.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:46.855 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:46.855 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:46.855 Removing: /var/run/dpdk/spdk3/config 00:29:46.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:46.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:46.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:46.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:46.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:46.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:46.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:46.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:46.855 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:46.855 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:46.855 Removing: /var/run/dpdk/spdk4/config 00:29:46.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:46.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:46.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:46.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:46.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:46.856 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:46.856 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:46.856 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:46.856 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:46.856 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:46.856 Removing: /dev/shm/bdev_svc_trace.1 00:29:46.856 Removing: /dev/shm/nvmf_trace.0 00:29:46.856 Removing: /dev/shm/spdk_tgt_trace.pid3394476 00:29:46.856 Removing: /var/run/dpdk/spdk0 00:29:46.856 Removing: /var/run/dpdk/spdk1 00:29:46.856 Removing: /var/run/dpdk/spdk2 00:29:46.856 Removing: /var/run/dpdk/spdk3 00:29:46.856 Removing: /var/run/dpdk/spdk4 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3392012 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3393250 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3394476 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3395175 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3396023 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3396287 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3397396 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3397489 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3397794 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3399516 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3400958 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3401279 
00:29:46.856 Removing: /var/run/dpdk/spdk_pid3401599 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3401938 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3402269 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3402554 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3402834 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3403092 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3404014 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3407104 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3407473 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3407767 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3407923 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3408372 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3408614 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3409183 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3409380 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3409587 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3409760 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3410050 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3410068 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3410688 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3410976 00:29:46.856 Removing: /var/run/dpdk/spdk_pid3411295 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3411571 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3411632 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3411721 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3411990 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3412270 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3412555 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3412841 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3413127 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3413503 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3413812 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3414156 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3414753 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3415107 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3415333 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3415581 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3415829 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3416054 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3416293 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3416568 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3416850 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3417140 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3417425 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3417717 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3417926 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3418362 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3422204 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3469180 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3473723 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3484354 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3489945 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3494447 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3494996 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3507218 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3507244 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3508051 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3508944 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3509990 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3510835 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3510953 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3511297 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3511542 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3511551 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3512350 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3513363 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3514214 
00:29:47.113 Removing: /var/run/dpdk/spdk_pid3514750 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3514791 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3515068 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3516431 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3517561 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3526303 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3526593 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3531118 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3537253 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3539979 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3550968 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3560861 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3562661 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3563716 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3581429 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3585450 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3590259 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3591860 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3593901 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3594035 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3594277 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3594545 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3595127 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3597133 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3598124 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3598698 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3601510 00:29:47.113 Removing: /var/run/dpdk/spdk_pid3602254 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3603026 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3607374 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3617872 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3621994 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3628286 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3629767 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3631276 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3635955 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3640375 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3648395 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3648503 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3653740 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3654002 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3654144 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3654543 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3654561 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3659343 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3659914 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3664540 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3667302 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3673157 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3678927 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3687855 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3695477 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3695522 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3714928 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3715725 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3716284 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3716991 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3717942 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3718496 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3719285 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3719841 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3724376 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3724653 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3731016 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3731258 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3733591 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3742187 
00:29:47.370 Removing: /var/run/dpdk/spdk_pid3742198 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3747743 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3749757 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3751768 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3752969 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3754993 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3756206 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3765639 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3766179 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3766711 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3769172 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3769708 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3770221 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3774235 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3774359 00:29:47.370 Removing: /var/run/dpdk/spdk_pid3775910 00:29:47.370 Clean 00:29:47.629 00:11:47 -- common/autotest_common.sh@1447 -- # return 0 00:29:47.629 00:11:47 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:29:47.629 00:11:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.629 00:11:47 -- common/autotest_common.sh@10 -- # set +x 00:29:47.629 00:11:48 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:29:47.629 00:11:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.629 00:11:48 -- common/autotest_common.sh@10 -- # set +x 00:29:47.629 00:11:48 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:47.629 00:11:48 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:47.629 00:11:48 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:47.629 00:11:48 -- spdk/autotest.sh@387 -- # hash lcov 00:29:47.629 00:11:48 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:47.629 00:11:48 -- spdk/autotest.sh@389 -- # hostname 00:29:47.629 00:11:48 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:47.629 geninfo: WARNING: invalid characters removed from testname! 
00:30:09.544 00:12:08 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:10.921 00:12:11 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:12.298 00:12:12 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:14.202 00:12:14 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:16.106 00:12:16 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:17.484 00:12:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:19.419 00:12:19 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:19.419 00:12:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:19.419 00:12:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:19.419 00:12:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:19.419 00:12:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:19.419 00:12:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:19.419 00:12:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:19.419 00:12:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:19.419 00:12:19 -- paths/export.sh@5 -- $ export PATH
00:30:19.419 00:12:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:19.419 00:12:19 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:19.419 00:12:19 -- common/autobuild_common.sh@437 -- $ date +%s
00:30:19.419 00:12:19 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715724739.XXXXXX
00:30:19.419 00:12:19 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715724739.UlHUP9
00:30:19.419 00:12:19 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:30:19.419 00:12:19 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:30:19.419 00:12:19 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:19.419 00:12:19 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:19.419 00:12:19 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:19.419 00:12:19 -- common/autobuild_common.sh@453 -- $ get_config_params
00:30:19.419 00:12:19 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:30:19.419 00:12:19 -- common/autotest_common.sh@10 -- $ set +x
00:30:19.419 00:12:19 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:30:19.419 00:12:19 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:30:19.419 00:12:19 -- pm/common@17 -- $ local monitor
00:30:19.420 00:12:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:19.420 00:12:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:19.420 00:12:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:19.420 00:12:19 -- pm/common@21 -- $ date +%s
00:30:19.420 00:12:19 -- pm/common@21 -- $ date +%s
00:30:19.420 00:12:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:19.420 00:12:19 -- pm/common@25 -- $ sleep 1
00:30:19.420 00:12:19 -- pm/common@21 -- $ date +%s
00:30:19.420 00:12:19 -- pm/common@21 -- $ date +%s
00:30:19.420 00:12:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715724739
00:30:19.420 00:12:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715724739
00:30:19.420 00:12:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715724739
00:30:19.420 00:12:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715724739
00:30:19.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715724739_collect-vmstat.pm.log
00:30:19.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715724739_collect-cpu-temp.pm.log
00:30:19.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715724739_collect-cpu-load.pm.log
00:30:19.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715724739_collect-bmc-pm.bmc.pm.log
00:30:20.361 00:12:20 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:30:20.361 00:12:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:30:20.361 00:12:20 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:20.361 00:12:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:20.361 00:12:20 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:20.361 00:12:20 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:20.361 00:12:20 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:20.361 00:12:20 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:20.361 00:12:20 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:20.361 00:12:20 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:20.361 00:12:20 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:20.361 00:12:20 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:20.361 00:12:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:20.361 00:12:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:20.361 00:12:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:20.361 00:12:20 -- pm/common@44 -- $ pid=3789591
00:30:20.361 00:12:20 -- pm/common@50 -- $ kill -TERM 3789591
00:30:20.361 00:12:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:20.361 00:12:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:20.361 00:12:20 -- pm/common@44 -- $ pid=3789593
00:30:20.361 00:12:20 -- pm/common@50 -- $ kill -TERM 3789593
00:30:20.361 00:12:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:20.361 00:12:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:20.361 00:12:20 -- pm/common@44 -- $ pid=3789595
00:30:20.361 00:12:20 -- pm/common@50 -- $ kill -TERM 3789595
00:30:20.361 00:12:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:20.361 00:12:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:20.361 00:12:20 -- pm/common@44 -- $ pid=3789623
00:30:20.361 00:12:20 -- pm/common@50 -- $ sudo -E kill -TERM 3789623
00:30:20.361 + [[ -n 3283164 ]]
00:30:20.361 + sudo kill 3283164
00:30:20.371 [Pipeline] }
00:30:20.388 [Pipeline] // stage
00:30:20.393 [Pipeline] }
00:30:20.410 [Pipeline] // timeout
00:30:20.415 [Pipeline] }
00:30:20.434 [Pipeline] // catchError
00:30:20.439 [Pipeline] }
00:30:20.455 [Pipeline] // wrap
00:30:20.462 [Pipeline] }
00:30:20.477 [Pipeline] // catchError
00:30:20.486 [Pipeline] stage
00:30:20.488 [Pipeline] { (Epilogue)
00:30:20.503 [Pipeline] catchError
00:30:20.505 [Pipeline] {
00:30:20.522 [Pipeline] echo
00:30:20.524 Cleanup processes
00:30:20.532 [Pipeline] sh
00:30:20.820 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:20.820 3789715 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:30:20.820 3790040 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:20.833 [Pipeline] sh
00:30:21.115 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:21.115 ++ grep -v 'sudo pgrep'
00:30:21.115 ++ awk '{print $1}'
00:30:21.115 + sudo kill -9 3789715
00:30:21.126 [Pipeline] sh
00:30:21.407 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:21.407 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:30:25.601 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:30:29.804 [Pipeline] sh
00:30:30.085 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:30.085 Artifacts sizes are good
00:30:30.100 [Pipeline] archiveArtifacts
00:30:30.107 Archiving artifacts
00:30:30.263 [Pipeline] sh
00:30:30.548 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:30:30.562 [Pipeline] cleanWs
00:30:30.570 [WS-CLEANUP] Deleting project workspace...
00:30:30.570 [WS-CLEANUP] Deferred wipeout is used...
00:30:30.576 [WS-CLEANUP] done
00:30:30.579 [Pipeline] }
00:30:30.636 [Pipeline] // catchError
00:30:30.653 [Pipeline] sh
00:30:30.933 + logger -p user.info -t JENKINS-CI
00:30:30.941 [Pipeline] }
00:30:30.955 [Pipeline] // stage
00:30:30.960 [Pipeline] }
00:30:30.977 [Pipeline] // node
00:30:30.982 [Pipeline] End of Pipeline
00:30:31.015 Finished: SUCCESS